Falcon 4 in the real world
-
Air combat
Fighter aircraft will soon get AI pilots
Nov 19th 2020
Classic dogfights, in which two pilots match wits and machines to shoot down their opponent with well-aimed gunfire, are a thing of the past. Guided missiles have seen to that, and the last recorded instance of such duelling was 32 years ago, near the end of the Iran-Iraq war, when an Iranian F-4 Phantom took out an Iraqi Su-22 with its 20mm cannon.
But memory lingers, and dogfighting, even of the simulated sort in which the laws of physics are substituted by equations running inside a computer, is reckoned a good test of the aptitude of a pilot in training. And that is also true when the pilot in question is, itself, a computer program. So, when America’s Defence Advanced Research Projects Agency (DARPA), an adventurous arm of the Pentagon, considered the future of air-to-air combat and the role of artificial intelligence (AI) within that future, it began with basics that Manfred von Richthofen himself might have approved of.
In August eight teams, representing firms ranging from large defence contractors to tiny startups, gathered virtually under the auspices of the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, for the three-day final of DARPA’s AlphaDogfight trials. Each had developed algorithms to control a virtual F-16 in simulated dogfights. First, these were to be pitted against each other. Then the winner took on a human being.
Dropping the pilot?
“When I got started”, says Colonel Dan Javorsek, who leads DARPA’s work in this area, “there was quite a bit of scepticism of whether the AI algorithms would be up to the task.” In fact, they were. The winner, created by Heron Systems, a small firm in the confusingly named town of California, Maryland, first swept aside its seven digital rivals and then scored a thumping victory against the human, a pilot from America’s air force, in five games out of five.
Though dogfighting practice, like parade-ground drill and military bands, is a leftover from an earlier form of warfare that still serves a residual purpose, the next phase of DARPA’s ACE (Air Combat Evolution) project belongs firmly in the future, for it will require the piloting programs to control two planes simultaneously. Also, these virtual aircraft will be armed with short-range missiles rather than guns. That increases the risk of accidental fratricide, for a missile dispatched towards the wrong target will pursue it relentlessly. Tests after that will get more realistic still, with longer-range missiles, the use of chaff and flares, and a requirement to deal with corrupt data and time lags of a sort typical of real radar information.
The point of all this, putative Top Guns should be reassured, is not so much to dispense with pilots as to help them by “a redistribution of cognitive workload within the cockpit”, as Colonel Javorsek puts it. In theory, taking the pilot out of the plane lets it manoeuvre without regard for the impact of high g-forces on squishy humans. An uncrewed plane is also easier to treat as cannon-fodder. Still, most designs for new fighter jets have not done away with cockpits. For example, both of the rival European programmes—the British-led Tempest and the Franco-German-Spanish Future Combat Air System (FCAS)—are currently “optionally manned”. There are several reasons for this, explains Nick Colosimo, a lead engineer at BAE Systems, Tempest’s chief contractor.
One is that eliminating the pilot does not provide much of a saving. The cockpit plus the assorted systems needed to keep a human being alive and happy at high altitude—cabin pressure, for example—contribute only 1-2% of a plane’s weight. A second is that even AI systems of great virtuosity have shortcomings. They tend not to be able to convey how they came to a decision, which makes it harder to understand why they made a mistake. They are also narrowly trained for specific applications and thus fail badly when outside the limits of that training or in response to “spoofing” by adversaries.
An example of this inflexibility is that, at one point in the AlphaDogfight trials, the organisers threw in a cruise missile to see what would happen. Cruise missiles follow preordained flight paths, so behave more simply than piloted jets. The AI pilots struggled with this because, paradoxically, they had beaten the missile in an earlier round and were now trained for more demanding threats. “A human pilot would have had no problem,” observes Chris DeMay, who runs the APL’s part of ACE. “AI is only as smart as the training you give it.”
This matters not only in the context of immediate military success. Many people worry about handing too much autonomy to weapons of war—particularly when civilian casualties are possible. International humanitarian law requires that any civilian harm caused by an attack be no more than proportionate to the military advantage hoped for. An AI, which would be hard to imbue with relevant strategic and political knowledge, might not be able to judge for itself whether an attack was permitted.
Of course, a human being could pilot an uncrewed plane remotely, says Mr Colosimo. But he doubts that communications links will ever be sufficiently dependable, given the “contested and congested electromagnetic environment”. In some cases, losing communications is no big deal; a plane can fly home. In others, it is an unacceptable risk. For instance, FCAS aircraft intended for France’s air force will carry that country’s air-to-surface nuclear missiles.
The priority for now, therefore, is what armed forces call “manned-unmanned teaming”. In this, a pilot hands off some tasks to a computer while managing others. Today’s pilots no longer need to point their radars in the right direction manually, for instance. But they are still forced to accelerate or turn to alter the chances of the success of a shot, says Colonel Javorsek. Those, he says, “are tasks that are very well suited to hand over”.
One example of such a handover comes from Lockheed Martin, an American aerospace giant. It is developing a missile-avoidance system that can tell which aircraft in a formation of several planes is the target of a particular missile attack, and what evasive actions are needed. This is something that currently requires the interpretation by a human being of several different displays of data.
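The geometry behind such a system is easy to sketch, even though the article gives no details of Lockheed Martin's actual method. A minimal illustration under invented assumptions (constant-velocity tracks, made-up positions and speeds): propagate the missile track forward and flag the formation member with the smallest predicted miss distance.

```python
# Illustrative sketch only: infer which aircraft in a formation a missile
# is most likely targeting, by propagating straight-line tracks forward
# and finding the smallest predicted miss distance. Real systems fuse
# noisy radar data and model missile guidance laws; this does neither.
import numpy as np

def predicted_miss_distance(m_pos, m_vel, a_pos, a_vel):
    """Closest approach between missile and aircraft, assuming both
    hold constant velocity (a deliberately crude assumption)."""
    rel_pos = a_pos - m_pos
    rel_vel = a_vel - m_vel
    speed_sq = rel_vel @ rel_vel
    if speed_sq < 1e-9:                            # no relative motion
        return np.linalg.norm(rel_pos)
    t = max(0.0, -(rel_pos @ rel_vel) / speed_sq)  # time of closest approach
    return np.linalg.norm(rel_pos + rel_vel * t)

def likely_target(missile, formation):
    """Index of the formation member with the smallest predicted miss."""
    dists = [predicted_miss_distance(missile["pos"], missile["vel"],
                                     ac["pos"], ac["vel"])
             for ac in formation]
    return int(np.argmin(dists))

# Toy usage: a missile boring in on aircraft 1 of a two-ship.
missile = {"pos": np.array([0., 0., 0.]), "vel": np.array([300., 0., 0.])}
formation = [
    {"pos": np.array([10_000., 2_000., 0.]), "vel": np.array([-200., 0., 0.])},
    {"pos": np.array([10_000., 100., 0.]),   "vel": np.array([-200., 0., 0.])},
]
print(likely_target(missile, formation))  # -> 1
```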
Another example is ground-collision avoidance. In 2018 a team led by the American air force, and including Lockheed Martin, won the Collier Trophy, an award for the greatest achievement in aeronautics in America, for its Automatic Ground Collision Avoidance System, which takes control of a plane if it is about to plough into the terrain. Such accidents, which can happen if a pilot experiencing severe g-forces passes out, account for three-quarters of the deaths of F-16 pilots. So far, the system has saved the lives of ten such pilots.
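The article does not describe the Auto-GCAS algorithm itself, but the core idea (intervene only at the last moment at which an immediate maximum-performance recovery still clears the terrain) can be caricatured in a few lines. A toy sketch, with invented numbers and a deliberately crude physics model:

```python
# Toy ground-collision-avoidance check, not the real Auto-GCAS, which
# models aircraft-specific recovery manoeuvres against digital terrain
# maps. Idea: predict the altitude lost in an immediate constant-g
# pull-up and intervene only when that recovery would barely clear
# the ground. All numbers are illustrative.
G = 9.81

def recovery_floor(alt_m, vspeed_ms, pull_up_g=5.0):
    """Lowest altitude reached if we pull up *right now*.
    In a pull-up at n g, the sink rate bleeds off at roughly
    (n - 1) * G m/s^2; v^2 = 2*a*d gives the altitude lost."""
    if vspeed_ms >= 0:
        return alt_m                         # already climbing
    decel = (pull_up_g - 1.0) * G
    altitude_lost = vspeed_ms ** 2 / (2.0 * decel)
    return alt_m - altitude_lost

def must_intervene(alt_m, vspeed_ms, terrain_m, margin_m=150.0):
    """Take control when even an immediate max-performance pull-up
    would bottom out within margin_m of the terrain."""
    return recovery_floor(alt_m, vspeed_ms) < terrain_m + margin_m

# A pilot in g-lock, descending at 250 m/s at 2,000 m over 1,200 m terrain:
print(must_intervene(2000.0, -250.0, 1200.0))  # True -> autopilot recovers
```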
A dog in the fight?
Eventually, DARPA plans to pit teams of two planes against each other, each team being controlled jointly by a human and an AI. Many air forces hope that, one day, a single human pilot might even orchestrate, though not micromanage, a whole fleet of accompanying unmanned planes.
For this to work, the interaction between human and machine will need to be seamless. Here, as Suzy Broadbent, a human-factors psychologist at BAE, observes, the video-game and digital-health industries both have contributions to make. Under her direction, Tempest’s engineers are working on “adaptive autonomy”, in which sensors measure a pilot’s sweat, heart-rate, brain activity and eye movement in order to judge whether he or she is getting overwhelmed and needs help. This approach has been tested in light aircraft, and further tests will be conducted next year in Typhoons, fighter jets made by a European consortium that includes BAE.
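BAE has not published how "adaptive autonomy" weighs those signals, but the general shape of such a system is a scoring function over physiological measurements that gates what the automation takes over. A minimal sketch, in which every feature, weight and threshold is an invented placeholder:

```python
# Minimal sketch of "adaptive autonomy": fuse physiological signals
# into a workload score and hand tasks to the automation when the
# pilot looks saturated. Features, weights and thresholds are all
# invented for illustration; the real Tempest work is unpublished.
from dataclasses import dataclass

@dataclass
class PilotSignals:
    heart_rate_bpm: float
    skin_conductance_us: float   # microsiemens, a proxy for sweat
    eeg_engagement: float        # 0..1, from a brain-activity index
    gaze_entropy: float          # 0..1, scattered eye movement

def workload_score(s: PilotSignals) -> float:
    """Weighted sum of normalised signals, clipped to 0..1."""
    score = (0.3 * min(max((s.heart_rate_bpm - 60) / 80, 0), 1)
             + 0.2 * min(s.skin_conductance_us / 20, 1)
             + 0.2 * s.eeg_engagement
             + 0.3 * s.gaze_entropy)
    return min(max(score, 0.0), 1.0)

def tasks_to_offload(score: float) -> list[str]:
    """Escalate what the automation handles as workload rises."""
    if score < 0.5:
        return []
    if score < 0.75:
        return ["radar_management"]
    return ["radar_management", "defensive_manoeuvring"]

signals = PilotSignals(heart_rate_bpm=155, skin_conductance_us=18,
                       eeg_engagement=0.9, gaze_entropy=0.8)
print(tasks_to_offload(workload_score(signals)))
```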
Ms Broadbent’s team is also experimenting with novel ways to deliver information to a pilot, from a Twitter-like feed to an anthropomorphic avatar. “People think the avatar option might be a bit ridiculous,” says Ms Broadbent, who raises the spectre of Clippy, a famously irritating talking paper clip that harangued users of Microsoft Office in the 1990s and 2000s. “Actually, think about the information we get from each other’s faces. Could a calming voice or smiling face help?”
Getting humans to trust machines is not a formality. Mr Colosimo points to the example of an automated weather-information service introduced on aircraft 25 years ago. “There was some resistance from the test pilots in terms of whether they could actually trust that information, as opposed to radioing through to air traffic control and speaking to a human.” Surrendering greater control requires breaking down such psychological barriers.
One of the aims of AlphaDogfight, says Mr DeMay, was to do just that by bringing pilots together with AI researchers, and letting them interact. Unsurprisingly, more grizzled stick-jockeys tend to be set in their ways. “The older pilots who grew up controlling the radar angle…see this sort of technology as a threat,” says Colonel Javorsek. “The younger generation, the digital natives that are coming up through the pipeline…trust these autonomous systems.” That is good news for DARPA; perhaps less so for Colonel Javorsek. “These things that I’m doing can be rather hazardous to one’s personal career”, the 43-year-old officer observes, “given that the people who make decisions on what happens to me are not the 25-year-old ones. They tend to be the 50-year-old ones.”
-
Ok, I’m a dinosaur and a romantic I guess with regard to flying and driving, but I hate the development of these pilotless planes and cars. I love driving and I love flying and controlling these vehicles. I don’t like it that everybody is now working so hard to make these autonomous vehicles. I guess it’s still a few years down the road, but still…
-
It’s already been done decades ago. What is an ICBM but a bomber which can be shot down without having to write letters to families?
-
If computers are as good as they’re trying to prove here, why not just skip the whole war and simulate its results? The winner just sends the loser lots of explosives to destroy the targets the computer decided should be destroyed, and the loser sends a bit less explosive to the winner to do the same.
I don’t think any humanitarian law is against it if lives can be saved this way by evacuating cities before destroying them. The computer just gives results that both countries must obey.
A fight of 4 AI vs 4 human pilots is something that could really show how good computers are, and it would need to be run many times to see whether the computer or the human can adapt new fighting tactics against the other.
-
Ok, I’m a dinosaur and a romantic I guess with regard to flying and driving, but I hate the development of these pilotless planes and cars. I love driving and I love flying and controlling these vehicles. I don’t like it that everybody is now working so hard to make these autonomous vehicles. I guess it’s still a few years down the road, but still…
AIs will certainly drive and fly better than humans. To some extent they already do. It doesn’t mean that we have to give up driving and flying altogether.
-
I doubt it. AI can’t really think. Driving is mostly based on following the rules, so it does well there, but any unexpected situations are still difficult for the AI to handle. Not only that, stupidly designed rules (say, intended not to make driving safe, but to boost traffic ticket revenue when people inevitably ignore them) will slow AI traffic to a crawl, though I suspect we can expect them to be revised once that starts to happen with any sort of regularity.
Also, I don’t think that the AI can fly better than a human in a real tactical situation. When you have dissimilar aircraft with partially unknown capabilities, you can’t train a machine learning algorithm against them. Of course, human pilots will be disadvantaged there, as well, but based on what I know about machine learning, they will be disadvantaged less than the AI. For flying bomb trucks, sure, drones are already doing air to ground, but for air to air, I’d expect humans to be a necessary component for a while.
We can, however, expect an emergence of anti-AI tactics, just like in chess, where there are special strategies designed to defeat computers. The problem with AIs is that they have no concept of “bloody stupid”. As such, a self-driving car for example won’t realize “OK, this guy in front of me is an idiot” or “whoever’s driving that 18-wheeler is either drunk or asleep” and won’t take appropriate action. This may result in a collision in which the AI-driven car won’t necessarily be at fault, but it won’t be much comfort to the occupants of a self-driving car crushed by a drunk trucker. An all-AI road system would be safer than an all-human one, but mixed humans and AI will likely be somewhat less safe than both. And that’s just the roads, where predictability is considered desirable.
-
I doubt it. AI can’t really think. Driving is mostly based on following the rules, so it does well there, but any unexpected situations are still difficult for the AI to handle. Not only that, stupidly designed rules (say, intended not to make driving safe, but to boost traffic ticket revenue when people inevitably ignore them) will slow AI traffic to a crawl, though I suspect we can expect them to be revised once that starts to happen with any sort of regularity.
Also, I don’t think that the AI can fly better than a human in a real tactical situation. When you have dissimilar aircraft with partially unknown capabilities, you can’t train a machine learning algorithm against them. Of course, human pilots will be disadvantaged there, as well, but based on what I know about machine learning, they will be disadvantaged less than the AI. For flying bomb trucks, sure, drones are already doing air to ground, but for air to air, I’d expect humans to be a necessary component for a while.
We can, however, expect an emergence of anti-AI tactics, just like in chess, where there are special strategies designed to defeat computers. The problem with AIs is that they have no concept of “bloody stupid”. As such, a self-driving car for example won’t realize “OK, this guy in front of me is an idiot” or “whoever’s driving that 18-wheeler is either drunk or asleep” and won’t take appropriate action. This may result in a collision in which the AI-driven car won’t necessarily be at fault, but it won’t be much comfort to the occupants of a self-driving car crushed by a drunk trucker. An all-AI road system would be safer than an all-human one, but mixed humans and AI will likely be somewhat less safe than both. And that’s just the roads, where predictability is considered desirable.
Agree 100%
-
We can, however, expect an emergence of anti-AI tactics, just like in chess, where there are special strategies designed to defeat computers. The problem with AIs is that they have no concept of “bloody stupid”. As such, a self-driving car for example won’t realize “OK, this guy in front of me is an idiot” or “whoever’s driving that 18-wheeler is either drunk or asleep” and won’t take appropriate action. This may result in a collision in which the AI-driven car won’t necessarily be at fault, but it won’t be much comfort to the occupants of a self-driving car crushed by a drunk trucker. An all-AI road system would be safer than an all-human one, but mixed humans and AI will likely be somewhat less safe than both. And that’s just the roads, where predictability is considered desirable.
That’s not AI in the modern usage of the term.
You’ve described a system where the AI follows a set of rules which are pre-determined by a human programmer. This is not ‘intelligence’, just following rules. Programming is basically writing rules, so it works well there, as you say. When talking about AI in this context, we are not talking about AI in the same sense as when we discuss the BMS AI, which is still a set of programmed conditions as a state machine.
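For what it’s worth, the distinction is easy to show in code. A scripted combat “AI” of the kind described here is a state machine whose every behaviour a programmer wrote down in advance; the states, ranges and rules below are invented for illustration and are not actual BMS code:

```python
# Toy rule-based combat "AI" of the scripted, state-machine kind:
# every behaviour is a condition a programmer wrote down in advance.
# States and thresholds are invented; this is not actual BMS code.
def next_state(state, contact_range_nm, own_fuel_fraction):
    if own_fuel_fraction < 0.2:
        return "RTB"                    # bingo fuel beats everything
    if state == "PATROL" and contact_range_nm < 40:
        return "INTERCEPT"
    if state == "INTERCEPT" and contact_range_nm < 10:
        return "ENGAGE"
    if state == "ENGAGE" and contact_range_nm > 15:
        return "INTERCEPT"              # target extended away
    return state

# A learned policy, by contrast, is a function whose rules nobody
# wrote: its behaviour comes from weights fitted to training data.
state = "PATROL"
for rng in (60, 35, 9, 20):
    state = next_state(state, rng, own_fuel_fraction=0.6)
    print(rng, "->", state)
# 60 -> PATROL, 35 -> INTERCEPT, 9 -> ENGAGE, 20 -> INTERCEPT
```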
you can’t train a machine learning algorithm
You can. It all depends on your training data - and you can develop training data for handling unknown capabilities, too.
In the case of a collision where the AI-driven car “isn’t necessarily at fault”, it’s not a great analogy. Partly because AI development is not done generally by devising clever state machines to follow rules, but partly because AI is already capable of reacting to other traffic accordingly.
If you want to see some excellent video showcasing what you can achieve with AI and machine learning, I’d suggest comparing Boston Dynamics videos from the start of the last decade with the videos from the end of the last decade. I’d highlight that at the start of that development timeframe, you have a robot that looks like an excellent example of your argument: Not smart enough to, as you put it, take appropriate action to a dynamic situation. I suggest that at the end of that timeframe, the same robot hardware is an excellent counter to your argument.
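One concrete version of “training data for unknown capabilities”, assuming that is the kind of technique meant here, is domain randomisation: randomise the adversary’s performance envelope each training episode so the policy cannot over-fit to one known aircraft. A sketch with invented parameters and a stubbed-out training loop:

```python
# Sketch of domain randomisation: instead of training against one known
# adversary, sample its performance envelope from wide ranges every
# episode, so the learned policy must cope with capabilities it has
# never seen exactly. Parameter names, ranges and the stub "training"
# loop are all illustrative.
import random

def sample_adversary():
    """Draw a random opponent from a deliberately generous envelope."""
    return {
        "max_g":         random.uniform(6.0, 12.0),   # sustained turn, g
        "thrust_ratio":  random.uniform(0.7, 1.3),    # thrust-to-weight
        "missile_range": random.uniform(5.0, 50.0),   # nautical miles
    }

def run_engagement(policy, adversary):
    """Stub for a simulated dogfight; returns a toy outcome score.
    A real trainer would fly out a full simulated engagement here."""
    difficulty = adversary["max_g"] * adversary["thrust_ratio"]
    return policy["skill"] - 0.1 * difficulty

def train(policy, episodes=10_000, lr=1e-4):
    """Toy update loop: nudge the policy more after bad outcomes."""
    for _ in range(episodes):
        outcome = run_engagement(policy, sample_adversary())
        policy["skill"] += lr * (1.0 if outcome < 0 else 0.1)
    return policy

print(train({"skill": 0.0}))
```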
-
@M79:
If computers are as good as they’re trying to prove here, why not just skip the whole war and simulate its results? The winner just sends the loser lots of explosives to destroy the targets the computer decided should be destroyed, and the loser sends a bit less explosive to the winner to do the same.
I don’t think any humanitarian law is against it if lives can be saved this way by evacuating cities before destroying them. The computer just gives results that both countries must obey.
A fight of 4 AI vs 4 human pilots is something that could really show how good computers are, and it would need to be run many times to see whether the computer or the human can adapt new fighting tactics against the other.
…wasn’t there a Star Trek episode based on this? And then as the casualty reports came in people on the list were just expected to report to a disintegration location for…er…disposal?
-
I agree with Blu3wolf…this isn’t really about “AI” as we tend to think about it - no tactics, no “thinking” involved. Just the achievement of a mathematical solution to an end game on an optimal path. It’s a dynamic situation, but it’s just pure, straight math based on kinetics and dynamics. The only tactical decisions involved are still left to humans at the strategic level - humans control the battle, “AI” only executes the/a fight. Pretty much like any other weapon system.
-
You can. It all depends on your training data - and you can develop training data for handling unknown capabilities, too.
In the case of a collision where the AI-driven car “isn’t necessarily at fault”, it’s not a great analogy. Partly because AI development is not done generally by devising clever state machines to follow rules, but partly because AI is already capable of reacting to other traffic accordingly.
If you want to see some excellent video showcasing what you can achieve with AI and machine learning, I’d suggest comparing Boston Dynamics videos from the start of the last decade with the videos from the end of the last decade. I’d highlight that at the start of that development timeframe, you have a robot that looks like an excellent example of your argument: Not smart enough to, as you put it, take appropriate action to a dynamic situation. I suggest that at the end of that timeframe, the same robot hardware is an excellent counter to your argument.
Perhaps so, but do we really want AI-controlled fighters making decisions by themselves on which plane to shoot or which target to bomb? I think we’re headed down a potentially dangerous path with this development. I can see the benefit that you don’t have to send your soldiers into harm’s way, but at the same time this is also the danger. The decision for politicians to go to war might become a lot easier if they don’t have to fear loss of lives on their side. I don’t want to turn this into a political debate, and I realize that the development is going full speed ahead and is probably inevitable. But I think it is very questionable whether we should really want to go down this road.
But for me a big part is also the romantic side of it, because being a fighter pilot is a job I always dreamed about, and one day that job is likely to be gone. Not that it matters much, since I’m too old to become a fighter pilot anyway, but still…
Anyway, that’s inevitable too. Every weapons system is outdated at some point in time, so that will be true for (manned) fighter jets as well.
-
I agree with Blu3wolf…this isn’t really about “AI” as we tend to think about it - no tactics, no “thinking” involved. Just the achievement of a mathematical solution to an end game on an optimal path. It’s a dynamic situation, but it’s just pure, straight math based on kinetics and dynamics. The only tactical decisions involved are still left to humans at the strategic level - humans control the battle, “AI” only executes the/a fight. Pretty much like any other weapon system.
Not just at the strategic level. In a CAS situation there are a lot of tactical decisions to be made on the battlefield. In an air battle the AI has to identify friend or foe etc. So the AI has plenty of tactical decisions to make, which can be made more complicated in all kinds of ways. Jamming, bad weather etc.
-
Not just at the strategic level. In a CAS situation there are a lot of tactical decisions to be made on the battlefield. In an air battle the AI has to identify friend or foe etc. So the AI has plenty of tactical decisions to make, which can be made more complicated in all kinds of ways. Jamming, bad weather etc.
I consider those decisions to be of a “strategic” nature too, just at a finer level - and typically in an actual CAS mission there isn’t as much room to deviate as one might like to imagine - the call is made for support and the supported either CANCO or CANTCO. Once the contract is made it’s executed and the time for any “tactical” decisions is past.
IFF is just another technical problem that doesn’t really require “AI” but can be solved at a purely automated/technical level. The issue then becomes data quality and reliability…and if the entire battlefield is automated, do you even still require the CAS mission, or do you simply manage point defense/offense (strategically) within the AOR? Does an AI air support vehicle simply fly in and destroy anything that isn’t ID’d as a friend? And do tacticians develop new ideas about “acceptable casualties”? How much autonomy do you actually give an AI weapon?
-
You can. It all depends on your training data - and you can develop training data for handling unknown capabilities, too.
You can’t train a machine-learning algorithm to have imagination. At most, you can get a diverse data set, but then, it’ll still be at a loss when something outside those parameters crops up. To some degree, you can compensate for “known unknowns”, but I have not yet seen an AI algorithm that could compensate for “unknown unknowns” the way humans do.
Yes, I am talking about ML and not just a state machine. Anti-computer tactics should exist for that, too, just more sophisticated ones. It’s good for some things, but it just isn’t an actual, thinking human. Even the Boston Dynamics bots are just pretending really well. I saw the video, and they’re impressive as far as robots go, but it’s not the kind of dynamic environment I was thinking about. Think about a student who learns everything by heart - such a student may pass an exam, maybe even pass for someone with a clue, but put that student in a situation that requires creative thinking, and you’ll only get a dog-eyed stare. Even something like speech recognition requires a lot of training and is utterly dependent on what microphone you use, whereas even a preschool child can recognize, over the telephone, not only the words but also who’s talking, even if the speech is very noisy.
Show me an AI that gets trained in F-4 vs. F-4, then takes down a human-flown MiG-21 (or at least survives the fight), and then we’ll be getting somewhere.
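There is a standard partial answer to the “unknown unknowns” objection, though it flags confusion rather than cures it: train an ensemble and treat disagreement between its members on a new input as an out-of-distribution alarm, at which point control reverts to a human. A toy sketch (the models and threshold are invented):

```python
# Sketch of one standard partial answer to "unknown unknowns":
# train several models on the same data and treat their *disagreement*
# on a new input as an out-of-distribution alarm. This doesn't make
# the AI imaginative; it only tells you when it is out of its depth,
# e.g. so control can revert to a human. Toy numbers throughout.
import statistics

def ensemble_predict(models, observation):
    actions = [m(observation) for m in models]
    spread = statistics.stdev(actions)
    if spread > 0.5:                      # threshold chosen arbitrarily
        return "DEFER_TO_HUMAN", spread
    return statistics.mean(actions), spread

# Three toy "policies" that agree on familiar inputs (small x) and
# diverge on an unfamiliar one (large x), as trained ensembles tend to.
models = [lambda x: 0.1 * x, lambda x: 0.11 * x, lambda x: 0.09 * x]
print(ensemble_predict(models, 1.0))    # agreement: act on the mean
print(ensemble_predict(models, 100.0))  # disagreement: defer
```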
-
…wasn’t there a Star Trek episode based on this? And then as the casualty reports came in people on the list were just expected to report to a disintegration location for…er…disposal?
Well, I’ve got a very good library for this. Just name the episode.
The whole conversation is about how good a computer is at adopting new “tactics” when the human is not using the same losing tactic again (trial and error). Can the human do this faster, or invent new strategies to beat the AI?
-
There is no hard line between robots operating on specific instructions and generalized AI. The generalization of instruction is not a binary thing (pun) but a continuum. Pick apart any AI no matter how generalized and you’ll find some deliberate human instruction somewhere.
AI will change, and already has changed, the nature of air warfare. The new PGMs, coming or already here, smartly prioritize targets they sense as they arrive in a designated area, network and coordinate with others, etc. The idea that an AI can never replace a human pilot is not seeing that AI isn’t going to be a drop-in direct replacement. The nature of the whole system will adjust to the new characteristics of the components. Airframes driven by AI have happened, and something resembling a current fighter jet will happen. AI is already superior at a lot of the tasks the human pilot performs, and that list is growing. The things that AI can’t yet do well will be some combination of accepted degradation, shifted off the platform by redesign, and improvement over time.
Simulated, clean and nice war (yes, like the Star Trek episode) misses the point of war: it is the absence of rules. Gentlemen’s agreements usually favor one side of a fight. Warfare is the breakdown of diplomacy, and warfare rules are a form of diplomacy. A terrorist bombing a subway is conducting warfare to their strengths. The nation state may demand “hey, you can’t do that, you have to use multi-million-dollar machines wearing special matching clothing, attacking only other multi-million-dollar machines with other people in different matching clothing inside.” It’s no surprise that a force which knows it will lose in that system won’t participate. Would you agree to play poker to resolve the conflict if you’re better at checkers? If the result of a simulation was annihilation, why would you agree to the results? What do you have to lose by rebelling?
-
@M79:
Well, I’ve got a very good library for this. Just name the episode.
The whole conversation is about how good a computer is at adopting new “tactics” when the human is not using the same losing tactic again (trial and error). Can the human do this faster, or invent new strategies to beat the AI?
Kind of goes to something I’ve heard said about tech and warfare - that a truly significant advancement doesn’t just change the fight, it changes HOW you fight. The real problem here is that we’re thinking of AI in the context of how it can be used to win chess games…that’s not how it’s used on a battlefield. Not yet, at least…and I expect to be long dead before that time, or because of it.
Oh…and it’s this episode.
-
This thread made me do some research on the current state of machine learning and its creativity. Attached below is my most worrying finding; the joker’s joke was beyond funny.
-
The biggest problem with AI operating any kind of platform is the idea of “perception”. The world is full of almost infinite objects and infinite problems (with their solutions) at infinite resolutions; dealing with it requires some sort of filter (humans use perception, where our brain filters whatever is “not required” out of our area of perceptible thought).
Our machines currently have 2 problems:
1. Sensors and perception. The sensors we have right now are not good enough (or they are too good, depending on your idea of perception). The information provided to the AI is too much for it to work with, at the speeds required, to complete complex tasks like driving down a road in a suburban environment. We have specific sensors that can accomplish simple tasks, like staying a certain distance away from vehicles in front of us, or keeping within the lines (see the sketch after this post), but the entire context is something that cannot currently be addressed at the computing rates we have with the sensors that are available.
2. The perception of what is right - the trolley problem: it will be hard for people/society to accept that someone in a cubicle somewhere will devise a program that prioritizes life. Or, worse yet, will support a machine-learning algorithm that calculates the solution with no “emotion” for us. For now, when a situation like this occurs, we write it off as “well, the driver/pilot/whoever made the best decision they could with the information they had at the time.” For the leap to full automation, society will have to approve the response before it happens. I think this will be very difficult.
If we can get around those two issues (the first is bound to happen eventually), then we’ll be in business!
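To make point 1 concrete, here is how narrow those “simple tasks” really are: a toy proportional controller for keeping distance from the vehicle ahead needs one range sensor and no perception of context at all. Gains and limits below are invented:

```python
# Toy distance-keeping (adaptive-cruise) controller: the kind of
# narrow, single-sensor task that is easy precisely because it needs
# no wider perception of context. Gains and numbers are invented.
def speed_command(own_speed, gap_m, lead_speed,
                  desired_headway_s=2.0, k_gap=0.1, k_speed=0.5):
    desired_gap = desired_headway_s * own_speed
    gap_error = gap_m - desired_gap          # positive: too far back
    speed_error = lead_speed - own_speed     # positive: falling behind
    accel = k_gap * gap_error + k_speed * speed_error
    return max(-3.0, min(accel, 2.0))        # clamp to comfortable limits

# Following at 30 m/s with a 40 m gap behind a car doing 28 m/s:
print(speed_command(own_speed=30.0, gap_m=40.0, lead_speed=28.0))  # -3.0, brake
```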
-
Perhaps so, but do we really want AI-controlled fighters making decisions by themselves on which plane to shoot or which target to bomb? I think we’re headed down a potentially dangerous path with this development. I can see the benefit that you don’t have to send your soldiers into harm’s way, but at the same time this is also the danger. The decision for politicians to go to war might become a lot easier if they don’t have to fear loss of lives on their side. I don’t want to turn this into a political debate, and I realize that the development is going full speed ahead and is probably inevitable. But I think it is very questionable whether we should really want to go down this road.
So far, the answer has been no, we don’t want that. That’s why the Samsung auto turrets on the DMZ are not operating in autonomous mode.
Notably, you can still have weapons systems which are very effective, without the AI making that decision for itself. Instead the AI makes a request, which a human operator then approves. Still humans killing humans, just with a fancier tool than usual.
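In software terms, that request-and-approve pattern is just a human gate between detection and engagement. A minimal sketch (the interface is invented, not any real fire-control system):

```python
# Sketch of a human-in-the-loop engagement gate: the automation may
# only *request* an engagement; nothing fires without an operator's
# explicit approval. The interface is invented, not any real system.
import queue

class EngagementGate:
    def __init__(self):
        self.pending = queue.Queue()

    def request(self, track_id, classification, confidence):
        """Called by the automation when it wants weapons release."""
        self.pending.put((track_id, classification, confidence))

    def review(self, approve):
        """Called by the human operator; `approve` decides each request."""
        while not self.pending.empty():
            track_id, cls, conf = self.pending.get()
            if approve(track_id, cls, conf):
                print(f"ENGAGE {track_id} ({cls}, p={conf:.2f})")
            else:
                print(f"HOLD FIRE on {track_id}")

gate = EngagementGate()
gate.request("T-042", "hostile_fast_mover", 0.97)
gate.request("T-043", "unknown", 0.55)
# Operator policy: only engage confident, positively identified tracks.
gate.review(lambda tid, cls, conf: cls != "unknown" and conf > 0.9)
```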