The Harpy is a very old HARM and wouldn't be anything close to being considered autonomous. It's basically what a layperson would call a "homing" missile that tracks and follows radiation in the form of high-power RF, a.k.a. radar.
Every nation has these and most are air-launched. Not much different from an IR missile like our AIM-9 in that it's a "fire and forget" system. That's in contrast with actively guided missiles like the AMRAAM.
Small note, but the AMRAAM is also a fire-and-forget missile, as the radar guidance system is contained on the missile itself and requires no additional input from the jet that fired it. This is in contrast with a missile like the AIM-7, which relies on the active radar of the parent aircraft to stay on target.
Nope. The primary BVR use case uses the APG-63/5/70 to give guidance to the AMRAAM until it can pick up active seeker lock itself. In theory if you are close enough you can "fire and forget" but it would be a waste and put you way closer to a threat than necessary - which would defeat the purpose of the AMRAAM.
The video surprised me too, so I took a closer look at this weapons system. The promotional material looks like it came straight out of the eighties because it does. According to Wikipedia[0], this is late-eighties tech. Maybe it'd make sense to add that somewhere to the submission?
It looks like the script is there to block PhantomJS-based bots (which can evaluate JS). It seems to check for the global properties mentioned in this article:
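For illustration, here's a minimal sketch of that style of check; the exact properties the site tests are an assumption on my part, but callPhantom and _phantom are globals PhantomJS is known to leak:

```typescript
function looksLikePhantomJS(): boolean {
  const w = window as any;
  return Boolean(
    w.callPhantom ||                        // bridge PhantomJS injects for page.onCallback
    w._phantom ||                           // global PhantomJS exposes on every page
    /PhantomJS/i.test(navigator.userAgent)  // default UA string, if the bot didn't spoof it
  );
}
```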
The structure and class names? Probably not; the structure looks kinda verbose, but not unreasonable if you imagine them using an off-the-shelf HTML/CSS library for the site.
It's just a result of ruthless optimization. Management is always pushing us devs to optimize the metrics that are easiest to measure, and there are plenty of online tools that will score a site based partly on the file sizes for JS, CSS and even HTML.
Right now, we're running the whole response through something like an optimizing compiler stack (Webpack, with lots of plugins and some custom modules). It renames everything to the shortest possible name, eliminates code that will never execute, inlines CSS into the <head> and <body> to reduce HTTP requests, splits the code into large bundles that can be cached forever and a small bundle that will be updated nearly every workday, stuff like that.
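As a rough illustration (not our actual config, and the plugin choices here are assumptions), the shape of that kind of setup looks something like this; the CSS inlining would be handled by yet another plugin:

```typescript
// Hypothetical webpack.config.ts sketching the optimizations described above.
import TerserPlugin from 'terser-webpack-plugin';
import type { Configuration } from 'webpack';

const config: Configuration = {
  mode: 'production',
  optimization: {
    minimizer: [
      new TerserPlugin({
        // shortest possible names + elimination of code that will never execute
        terserOptions: { mangle: true, compress: { dead_code: true } },
      }),
    ],
    splitChunks: {
      cacheGroups: {
        // big, rarely-changing bundle that can be cached "forever"
        vendor: { test: /[\\/]node_modules[\\/]/, name: 'vendor', chunks: 'all' },
      },
    },
  },
  output: {
    // content hashes make long-term caching safe; the small app bundle
    // gets a new hash nearly every workday
    filename: '[name].[contenthash].js',
  },
};

export default config;
```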
I'm afraid we're at the point now where a modern HTML response requires advanced tools to make it legible to a human. For that I apologize, no sarc; we are aware of the loss of culture that is happening. There is some pushback happening, in the form of a ruthless minimalism (plain HTML, no CSS, no JS), but good luck demonstrating the beauty of that to the UX team.
If it's any consolation, these advanced tools are built into FF and Chrome; just right click, inspect element, and look for associated JS handlers. The browser can clean up the code formatting for you, although the names will still be arbitrary. Set a breakpoint on an interesting statement, trigger (or manually fire) the action you followed to get there, and explore the values that are in scope at that point. Watching the values change will often tell you more than reading the code itself, when the names are meaningless.
As for the inconsistency mentioned by my sibling, that sounds a little broken. One of the things we do with the Webpack stack is specialize the code for known browsers, eliminating the browser-specific stuff that isn't needed for the current visitor; w3m and wget are not high-priority targets :/
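Just to make that concrete, a hypothetical sketch of per-visitor bundle selection (the real mechanism on that site could be entirely different):

```typescript
import express from 'express';

const app = express();

app.get('/app.js', (req, res) => {
  const ua = String(req.headers['user-agent'] ?? '');
  // Recent Chromium/Firefox get the slim build with the other browsers' shims stripped out;
  // anything unrecognised (text browsers, wget) falls back to the legacy bundle.
  const bundle = /Chrome\/\d{2,}|Firefox\/\d{2,}/.test(ua)
    ? 'dist/app.modern.js'
    : 'dist/app.legacy.js';
  res.sendFile(bundle, { root: process.cwd() });
});

app.listen(3000);
```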
It doesn't always do that. I first loaded the page in emacs-w3m, which doesn't interpret javascript, and got the output I pasted above. Then I went back a bit later and loaded the page again, and got normal html (while still using emacs-w3m, a browser that can't handle javascript).
Then I tried wget and got the same weird obfuscated stuff I pasted above, and got it again when I wgot it again.
See also the infamous "collateral murder" video on LiveLeak. I will not link it here because of how absolutely horrific it is. Do not seek it out if you value your faith in humanity. Having said that, I believe it is representative of modern warfare, and I do not think that atrocity should ever be forgotten.
There are so many videos like that out there. The fact that two Reuters journalists were killed in the "collateral murder" video is notable, but unfortunately children and first responders being killed is not out of the ordinary.
Yesterday I posted a compilation of Apache gunships blowing up black-and-white silhouettes of people and massive expanses of land. I don't buy for a second that the US military or their contractors really know anything about the people they are killing, beyond the fact that they have identified with certain groups for reasons that could range from nationalism to plain self-preservation and a salary. It is indisputable that most of the millions of people who died in the Middle East in the last 15 years were not guilty of any crime, and even if they were, the US consistently provided no proof before incinerating them extrajudicially.
The video is very sad. I feel the news in the UK is way, way, way too soft. All citizens should see the real cost of war. It is so easy to just be ignorant: "oh that, yawn". If people aren't shown it, how can they ever judge (with a vote at the ballot box) whether to be for or against it?
Individuals are not held accountable now. The US defines all drone kills as terrorists, by definition. It's on the corpse to prove its innocence after the fact.
Obviously, this makes great training data for ML algorithms.
Note that this is very old HARM (radar-seeking weapon) tech. There is no difference, legally or morally, between launching this mini plane from a vehicle and launching a normal radar-seeking missile or bomb from a plane, as has been done for decades. And if the weapon causes collateral damage, that is the responsibility of the person who pulled the trigger, then the commander, and ultimately the nation that did it. Even if this were some kind of "smart" weapon, nothing would change. Whoever decided to use it has responsibility for the outcome.
The moral hazard of autonomous and remote controlled weapons isn't that there is no responsibility, but that the threshold for use is lowered.
The government deploying this in wartime is responsible for its actions. "Corporations" not directly associated with a government actor would be considered mercenaries, with their own set of international law.
But who will the deploying government's army hold responsible? If a soldier recklessly kills several civilians, they will likely be held responsible for their actions and discharged. If an autonomous robot does the same, then what will happen?
Not that it will make you feel any better, but often the offending soldier is not held responsible. Ultimately might will make right and the victors will decide on the punishment.
The "system" is already an excuse. Example the Tarnak Farm Incident where a US pilot killed 4 Canadian soldiers:
"... as much as the F-16 pilots bear final responsibility for the fratricide incident, there existed other systemic shortcomings in air coordination and control procedures, as well as mission planning practices by the tactical flying units, that may have prevented the accident had they been corrected."[5]
Despite the fact that he "flagrantly disregarded a direct order", "exercised a total lack of basic flight discipline", and "blatantly ignored the applicable rules of engagement", he was merely fined $5,700.
While it may feel emotionally gratifying to punish the human soldier, isn't it better overall to have a robot that can be fixed with a patch? The same sorts of arguments can be made about self-driving cars.
The U.S. military's current approach, the "Third Offset" which is still in development, is based on the theory that AI is good at defined tasks (e.g., identifying enemy planes) but not at handling the chaos of war (e.g., everything involved in flying a plane on a mission); that is, it's not good at general intelligence. AI can park your car, but the other cars aren't intelligent adversaries who are trying to stop you from parking, destroy your car, and kill you.
The general design is to combine AIs and humans in what they call "centaurs", utilizing the strengths of each in a team. At least, that is what is said publicly.
But autonomy already is deployed. For example, on ships, the last line of defense against incoming missiles is basically a set of large, autonomous machine guns; a human could never act quickly enough against supersonic incoming missiles. Computer system attack and defense ("cyber") is expected to be run by AI, simply because humans can't keep up with an attacking AI. Autonomy also is necessary because you can't depend on having secure, effective communication with your drones; that would be a huge vulnerability.
But autonomy is much trickier to define than it first appears: If you shoot a 'dumb' artillery shell at a target, you lose control of the shell the moment after you pull the trigger. If a group of children subsequently enters the target area, that 'autonomous' shell is going to kill them and you can't stop it. (An AI weapon might be safer in that kind of circumstance; it can change course.) Is it different to shoot a dumb shell to kill everything in the target area than to send an AI-operated drone to the same target area with the same instructions?
> If you shoot a 'dumb' artillery shell at a target, you lose control of the shell the moment after you pull the trigger. If a group of children subsequently enters the target area...
Nitpick: The children would have to move very quickly to get there during the time of flight of the shell (a few seconds). The interval during which you can intervene is actually the time from the forward observer (remember, arty gunners don't see their own targets) making the call for fire to the time at which the shell is fired. If the notional target is assessed as a high priority (target priorities are pre-assigned), and you trust the forward observer to have checked for collateral damage, this could be a very short period.
If loitering munitions are used instead of conventional arty, and they can accept targeting updates in-flight, then the interval could be as long as you like (within weapon endurance constraints).
> Nitpick: The children would have to move very quickly to get there during the time of flight of the shell (a few seconds).
Double nitpick: 155mm artillery rarely has a time of flight of a "few seconds" - as they usually call "splash" 5 seconds before the anticipated impact in order to give the forward observer a heads up. A TOF of a minute isn't unusual - in that time a lot can happen.
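A back-of-envelope check (simple vacuum trajectory, so only an order-of-magnitude estimate; real ballistics with drag differ):

```typescript
// Vacuum time of flight: t = 2 * v * sin(theta) / g
const v = 800;                              // muzzle velocity, m/s (ballpark for a 155mm gun)
const theta = (45 * Math.PI) / 180;         // quadrant elevation for maximum range
const g = 9.81;                             // m/s^2
const tof = (2 * v * Math.sin(theta)) / g;
console.log(`${tof.toFixed(0)} s`);         // ~115 s in a vacuum; even with drag, a minute-plus at long range is plausible
```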
Don't they have a range of something like 20 miles, which would take much more than a few seconds? And that is measured in a straight line from the gun to the target; the shell, of course, takes the long way.
Unlikely. But see Ukraine (and ISIS) for how the drone thing is playing out. Drones are now integrated into the battlefield more and more. But they haven't replaced humans, and they won't until they're multi-purpose and able to adapt.
That's actually an interesting business idea. Pretty risky, though: if you sell to both sides, you'll always also be selling to the losing side, and the eventual winning side might not be happy about that. Also, depending on the anti-air capabilities on the battlefield and the price of a drone, the profits might not make up for the losses.
> depending on the anti air capabilities on the battlefield
Yes. The use of drones by western forces has coincided with operations in which the west has had total air supremacy. The larger weaponised drones (Predator, etc) have never been flown in a hostile air environment, against a peer force that has serious anti-air. I would be reluctant to fly my drones in such an environment. Smaller throw-away drones are a different issue.
Smaller drones could deliver small anti-personnel/incendiary payloads on e.g. London, or just be used for ransom - a dozen expendable drones could occupy the airspace over Heathrow until some anoncoin was paid. The sky's the limit, literally! Exciting times!
I'm sure both sides will stop fighting long enough for that company to send some consultants in to establish an independent airfield and command and control, then sell their services. Drones still need human support.