Falcon 4 in the real world
-
Not just at the strategic level. In a CAS situation there are a lot of tactical decisions to be made on the battlefield. In an air battle the AI has to identify friend or foe, and so on. So the AI has plenty of tactical decisions to make, which can be complicated in all kinds of ways: jamming, bad weather, etc.
I consider those decisions to be of a “strategic” nature too, just at a finer level - and typically in an actual CAS mission there isn’t as much room to deviate as one might like to imagine: the call is made for support, and the supporting asset either CANCOs or CANTCOs. Once the contract is made it’s executed, and the time for any “tactical” decisions is past.
IFF is just another technical problem that doesn’t really require “AI” but can be solved at a purely automated/technical level. The issue then becomes data quality and reliability… and if the entire battlefield is automated, do you even still require the CAS mission, or do you simply manage point defense/offense (strategically) within the AOR? Does an AI air-support vehicle simply fly in and destroy anything that isn’t ID’d as a friend? And do tacticians develop new ideas about “acceptable casualties”? How much autonomy do you actually give an AI weapon?
-
You can. It all depends on your training data - and you can develop training data for handling unknown capabilities, too.
You can’t train a machine-learning model to have imagination. At most, you can get a diverse data set, but it’ll still be at a loss when something outside those parameters crops up. To some degree, you can compensate for “known unknowns”, but I have not yet seen an AI algorithm that could compensate for “unknown unknowns” the way humans do.
Yes, I am talking about ML and not just a state machine. Anti-computer tactics should exist for that, too, just more sophisticated ones. It’s good for some things, but it just isn’t an actual, thinking human. Even the Boston Dynamics bots are just pretending really well. I saw the video, and they’re impressive as far as robots go, but that’s not the kind of dynamic environment I was thinking about. Think about a student who learns everything by heart - such a student may pass an exam, maybe even pass for someone with a clue, but put that student in a situation that requires creative thinking, and you’ll only get a blank stare. Even something like speech recognition requires a lot of training and is utterly dependent on what microphone you use, while even a preschool child can recognize, over the telephone, not only the words but also who’s talking, even if the speech is very noisy.
Show me an AI that gets trained in F-4 vs. F-4, then takes down a human-flown MiG-21 (or at least survives the fight), and then we’ll be getting somewhere.
-
…wasn’t there a Star Trek episode based on this? And then as the casualty reports came in people on the list were just expected to report to a disintegration location for…er…disposal?
Well, I have a very good library for this. Just name the episode.
The whole conversation is about how well a computer can adopt new “tactics”, while a human won’t keep using the same losing tactic again (trial and error). Can a human do this faster, or invent new strategies to beat the AI?
-
There is no hard line between robots operating on specific instructions and generalized AI. The generalization of instruction is not a binary thing (pun intended) but a continuum. Pick apart any AI, no matter how generalized, and you’ll find some deliberate human instruction somewhere.
AI will change the nature of air warfare, and already has. The new PGMs, coming or here already, smartly prioritize the targets they sense as they arrive in a designated area, network and coordinate with others, etc. The idea that an AI can never replace a human pilot misses that AI isn’t going to be a drop-in, direct replacement. The nature of the whole system will adjust to the new characteristics of the components. Airframes driven by AI have happened, and something resembling a current fighter jet will happen. AI is already superior at a lot of tasks that the human pilot performs, and that list is growing. The things that AI can’t do well yet will be some combination of accepted as degradation, shifted off that platform by redesign, and improved over time.
Simulated clean and nice war (yes like the Star Trek episode) misses the point of war: that it’s the lack of rules. Gentleman’s agreements usually favor one side of a fight. Warfare is the breakdown of diplomacy and warfare rules are a form of diplomacy. A terrorist bombing a subway is conducting warfare to their strengths. The nation state may demand “hey you can’t do that, you have to use multi-million dollar machines wearing special matching clothing attacking only other multi-million dollar machines with other people with different matching clothing inside.” It’s no surprise that a force which knows it will lose in that system won’t participate. Would you agree to play poker to resolve the conflict if you’re better at checkers? If the result of a simulation was annihilation, why would you agree to the results? What do you have to lose by rebelling?
-
@M79:
Well, I have a very good library for this. Just name the episode.
The whole conversation is about how well a computer can adopt new “tactics”, while a human won’t keep using the same losing tactic again (trial and error). Can a human do this faster, or invent new strategies to beat the AI?

Kind of goes to something I’ve heard said about tech and warfare - that a truly significant advancement doesn’t just change the fight, it changes HOW you fight. The real problem here is that we’re thinking of AI in the context of how it can be used to win chess games… that’s not how it’s used on a battlefield. Not yet, at least… and I expect to be long dead before that time, or because of it.
Oh…and it’s this episode -
-
This thread made me do some research on the current state of machine learning and its creativity. Attached below is my most worrying finding; the joker’s joke was beyond funny.
-
The biggest problem with AI operating any kind of platform is the idea of “perception”. The world is full of nearly infinite objects and problems (with their solutions) at infinite resolutions, and without some sort of filter it is impossible to deal with them all (humans use perception, where the brain filters out whatever is “not required” from our area of conscious thought).
Our machines currently have 2 problems:
1. The sensors and perception we have right now are not good enough (or they are too good, depending on your idea of perception). The information provided to the AI is too much for it to work with, at the speeds required, to complete complex tasks like driving down a road in a suburban environment. We have specific sensors that can accomplish simple tasks, like staying a certain distance away from the vehicle in front of us, or keeping within the lines, but the entire context is something that cannot currently be addressed at the computing rates we have with the sensors that are available.
2. The perception of what is right - the trolley problem: it will be hard for people/society to accept that someone in a cubicle somewhere will devise a program that prioritizes life. Or, worse yet, will support a machine-learning algorithm that calculates the solution, with no “emotion”, for us. For now, when a situation like this occurs, we write it off as “well, the driver/pilot/whoever made the best decision they could with the information they had at the time.” For the leap to full automation, society will have to approve the response before it happens. I think this will be very difficult.
If we can get around those two issues (the first is bound to happen eventually), then we’ll be in business!
-
Perhaps so, but do we really want AI-controlled fighters making decisions by themselves about which plane to shoot or which target to bomb? I think we’re headed down a potentially dangerous path with this development. I can see the benefit that you don’t have to send your soldiers into harm’s way, but at the same time this is also the danger: the decision for politicians to go to war might become a lot easier if they don’t have to fear loss of lives on their side. I don’t want to turn this into a political debate, and I realize that the development is going full speed ahead and is probably inevitable. But I think it is very questionable whether we should really want to go down this road.
So far, the answer has been no, we don’t want that. Hence why the Samsung auto turrets on the DMZ are not operating in Autonomous mode.
Notably, you can still have weapons systems which are very effective, without the AI making that decision for itself. Instead the AI makes a request, which a human operator then approves. Still humans killing humans, just with a fancier tool than usual.
-
Insert the joke about AI being just if statements here.
-
You can’t train a machine-learning model to have imagination. At most, you can get a diverse data set, but it’ll still be at a loss when something outside those parameters crops up. To some degree, you can compensate for “known unknowns”, but I have not yet seen an AI algorithm that could compensate for “unknown unknowns” the way humans do.
Yes, I am talking about ML and not just a state machine. Anti-computer tactics should exist for that, too, just more sophisticated ones. It’s good for some things, but it just isn’t an actual, thinking human. Even the Boston Dynamics bots are just pretending really well. I saw the video, and they’re impressive as far as robots go, but that’s not the kind of dynamic environment I was thinking about. Think about a student who learns everything by heart - such a student may pass an exam, maybe even pass for someone with a clue, but put that student in a situation that requires creative thinking, and you’ll only get a blank stare. Even something like speech recognition requires a lot of training and is utterly dependent on what microphone you use, while even a preschool child can recognize, over the telephone, not only the words but also who’s talking, even if the speech is very noisy.
Show me an AI that gets trained in F-4 vs. F-4, then takes down a human-flown MiG-21 (or at least survives the fight), and then we’ll be getting somewhere.
AI can analyze data and optimize decisions in a way that no human ever can, and it will only get better with time. It was once thought that AI could not learn how to play the game Go, given that there is no clear strategy to it. Now a neural net that learned how to play without being explicitly programmed with the rules is unbeatable by any human.
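For a sense of scale behind that Go claim: brute-force search is hopeless, which is why a learned evaluation was needed at all. A back-of-envelope sketch using commonly cited rough figures (branching factor ~250, game length ~150 plies; both are approximations for illustration only):

```python
import math

# Back-of-envelope search-space sizes for Go (rough, commonly cited figures;
# the exact numbers are approximations, not precise values).
branching = 250   # ~legal moves per position
depth = 150       # ~plies in a typical game

game_tree_size = branching ** depth   # positions a naive search would visit
board_states = 3 ** (19 * 19)         # each point empty/black/white (upper bound)

print(f"game tree ~10^{int(math.log10(game_tree_size))}")              # ~10^359
print(f"board-state upper bound ~10^{int(math.log10(board_states))}")  # ~10^172
```

For comparison, the number of atoms in the observable universe is usually put around 10^80, so enumerating positions the way a lookup-table “state machine” would need to is simply not an option.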
Pilotless aircraft are an obvious thing for the future. An aircraft will no longer be constrained during maneuvering by the need not to crush the pilot or cause him to black out. It will never miss a sensor reading or a threat warning, and will always apply the optimal, by-the-book problem-resolution method instantly. It can also make 99.99% of decisions by itself, and perfectly so, while referring to human operators for remote guidance on the rare occasions when that is needed.
-
AI can analyze data and optimize decisions in a way that no human ever can, and it will only get better with time. It was once thought that AI could not learn how to play the game Go, given that there is no clear strategy to it. Now a neural net that learned how to play without being explicitly programmed with the rules is unbeatable by any human.
This is incorrect. Go is a game with clear goals and static rules. It is impossible to make a state machine that would be more than decent at it. For a neural net, however, this is an easy problem to solve.
Yes, the AI will always apply the “optimal”, by the book method, and that’s how it’ll be utterly trounced by the humans. Tactics will be developed accounting for the AI always flying a “perfect” procedure, probably some sort of feint to bait it into a maneuver that, if you know it’s coming, you can counter. The feint doesn’t even have to be a maneuver, it could be achieved by spoofing the AI’s sensors to make it think it sees something different than it does. Machine vision, in particular, is very far behind human capabilities, and its peculiarities could likely be exploited.
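The sensor-spoofing idea has a direct analogue in ML research: adversarial examples, where small, deliberately chosen input perturbations flip a classifier’s decision. A toy sketch on a made-up linear friend/hostile classifier (all weights and inputs invented; real attacks apply the same gradient-sign trick to deep networks, e.g. the fast gradient sign method):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical trained classifier: w.x > 0 reads as "friendly".
w = [2.0, -1.0, 0.5]          # learned weights (invented)
x = [1.0, 0.2, 0.4]           # a genuinely "friendly" input: w.x = 2.0

score = sum(wi * xi for wi, xi in zip(w, x))
print(round(sigmoid(score), 2))      # 0.88 -> confidently "friendly"

# The attacker nudges each feature slightly in the direction that lowers
# the score: x' = x - eps * sign(w). No single feature moves much, but the
# decision flips.
eps = 0.8
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]
score_adv = sum(wi * xi for wi, xi in zip(w, x_adv))
print(round(sigmoid(score_adv), 2))  # 0.31 -> now reads "hostile"
```

The point of the sketch is only that a model’s “perfect” decision boundary is itself a fixed procedure an adversary can probe and exploit.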
-
This is incorrect. Go is a game with clear goals and static rules. It is impossible to make a state machine that would be more than decent at it. For a neural net, however, this is an easy problem to solve.
Yes, the AI will always apply the “optimal”, by the book method, and that’s how it’ll be utterly trounced by the humans. Tactics will be developed accounting for the AI always flying a “perfect” procedure, probably some sort of feint to bait it into a maneuver that, if you know it’s coming, you can counter. The feint doesn’t even have to be a maneuver, it could be achieved by spoofing the AI’s sensors to make it think it sees something different than it does. Machine vision, in particular, is very far behind human capabilities, and its peculiarities could likely be exploited.
-
Air combat also has only a few rules of thumb: stay out of the NEZ, keep your speed and Ps high, fly certain maneuvers.
-
The “AI” can calculate the performance of the opponent in REAL time, while the human can at best guess or estimate.
-
The only real problem for an AI is translating the data gathered by radar, EO and other systems. If that works, the SA and decision-making capability of a computer is far, far above a human’s.
Considering all this, a well-programmed AI will sooner or later defeat any human pilot, as long as they have the same SA. And if a target is tracked by radar or EO plus data link, the AI can identify even the type of the opponent –> using Ps curves and how the opponent flies, even the fuel level of the plane can be estimated –> the AI simply knows how much time it has to win, because of the fuel level.
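The “Ps curves” inference rests on the textbook specific-excess-power relation, Ps = (T − D)·V / W: at fixed thrust, drag and speed, a lighter (lower-fuel) aircraft shows a higher Ps. A minimal sketch with invented round numbers (no real aircraft data):

```python
# Specific excess power: Ps = (T - D) * V / W, in m/s.
# All figures below are made up for illustration.

def specific_excess_power(thrust_n, drag_n, speed_ms, weight_n):
    """Climb/acceleration capability per unit weight, in m/s."""
    return (thrust_n - drag_n) * speed_ms / weight_n

# Hypothetical fighter at two fuel states: burning fuel lowers weight,
# which raises Ps at the same thrust, drag and speed.
thrust, drag, speed = 100_000.0, 60_000.0, 250.0   # N, N, m/s (invented)
heavy = specific_excess_power(thrust, drag, speed, weight_n=180_000.0)
light = specific_excess_power(thrust, drag, speed, weight_n=150_000.0)
print(round(heavy, 1), round(light, 1))   # 55.6 66.7
```

Run in reverse, an observed sustained Ps at a known speed bounds the opponent’s current weight, which is the fuel-state estimate described above.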
-
-
This is incorrect. Go is a game with clear goals and static rules. It is impossible to make a state machine that would be more than decent at it. For a neural net, however, this is an easy problem to solve.
Yes, the AI will always apply the “optimal”, by the book method, and that’s how it’ll be utterly trounced by the humans. Tactics will be developed accounting for the AI always flying a “perfect” procedure, probably some sort of feint to bait it into a maneuver that, if you know it’s coming, you can counter. The feint doesn’t even have to be a maneuver, it could be achieved by spoofing the AI’s sensors to make it think it sees something different than it does. Machine vision, in particular, is very far behind human capabilities, and its peculiarities could likely be exploited.
I said there is no clear strategy for playing it, not clear goals.
99.99% of aerial combat is sensor based now. Vision is almost irrelevant. AI will beat a human pilot almost every time.
-
- The “AI” can calculate the performance of the opponent in REAL time, while the human can at best guess or estimate.
No, it can’t. It only knows what it can see on sensors. If you’ve got a plane coming at you at mach 0.9, that doesn’t give you much besides the fact it can go at least that fast. If you’ve got an enemy that’s turning, you don’t know if it’s because it’s the hardest turn he can pull, or if he’s turning at less than that to conserve energy. There’s so much data it simply won’t have. In fact, due to limitations of computer vision, even recognizing an enemy aircraft is a challenge, though it can be mitigated by AWACS and datalink.
In fact, that the human can work off a vague guess or estimate, while AI can only try to calculate things from what it sees (which might be wrong), is an advantage going to the human. If I know I’m fighting the Su-57, then if I see it going at mach 0.9 I’ll know it can probably go faster than that. Machine learning, unless trained on its real stats (which the Russians are keeping under wraps), won’t. AI doesn’t think; it learns by pure, unconstrained trial and error.
Everyone is consistently giving these algorithms too much credit. Those who actually work with them (I personally don’t, but my uncle does) know they’re limited. They are amazing tools, but they’re nothing more than tools that a thinking human, like a chemist or a pilot, can use to achieve something, like finding a drug candidate or getting a firing solution. Rather than EDI, think ERS from HAWX. IMO, AI will supplement, rather than replace, the pilot. Whether the pilot sits in a comfy chair on the ground and flies remotely, or is crammed into the cockpit, is another matter entirely, and more a question of how well we can prevent the remote-control datalink from being jammed.
-
Agree again.
-
Let me put it this way: I am a physical chemist, and I work with big-data analysis of multidimensional attosecond/femtosecond spectroscopy. I have developed tons of code in MATLAB, C++, Python, etc. for ML/DL, but also for my hobbies (flight simulation and wargaming), and I can tell you, dragon, you don’t understand what you are talking about. Actually, this topic is full of misconceptions about AI, ML, DL, CNNs, training, generalized intelligence, conscious vs. unconscious processes in the human mind, models of the cognitive brain, etc. Even the meaning of the word “think”, dragon, requires a definition. There is ample experimental evidence that most of our actions are pre-computed in an unconscious way prior to the final conscious step. Where does “thinking” take place? There is evidence of tons of sub-cortex circuits running C0 computations before the consciousness “takes a decision”. Think of the neuronal workspace model by Dehaene et al., for example. ML/DL are already simulating C0 algorithms nowadays… a few lines of code can do miracles. Last year a paper was published where a kind of DL algorithm was trained to read and “understand” (sic) the literature about certain physico-chemical properties of materials for organic electronics (AFIR). At the end, the network was asked to suggest a novel material with enhanced properties. And it delivered… it read the literature and proposed a material with novel properties that had never been thought of before. The research group tested the material, and indeed the network was right!
Although C0 algorithms are nowadays easily implemented, C1/C2 features are indeed still in development… first steps. But it is happening. Bayesian networks are a hot topic in that regard, and are being implemented as we speak with amazing results. If you want any reviews (papers) about these topics, I am more than glad to share them with you.
This paper here is very interesting, although requires some understanding of the topic:
“What is consciousness, and could machines have it?”
https://science.sciencemag.org/content/358/6362/486?rss=1 -
No, it can’t. It only knows what it can see on sensors.
Same for humans, who have only very weak eyes - can’t see at night with them - and a very, very slow brain in terms of processing power.
-
The human pilot uses the same radar as the AI. But an AI can easily calculate the possible engagement zone itself and fly a calculated path 100% precisely. The human can’t do this. He can literally focus on only 1-2 targets, and that is all. The AI can easily handle dozens.
-
Same with DAS: the system has to translate things for the human, and the brain + eye has to process what the display shows, while the AI can identify a tank on the TGP from a few pixels and distinguish it from a SHORAD…
In many-vs-many air combat, using just the data from AWACS and other planes, the AI can simply keep itself on the edge of the DLZ and loft missiles far better than ANY human, because it calculates things rather than “feels” them.
About 4 years ago I read about a test where an AI-controlled plane, against superior numbers, kicked the asses of human pilots…

If you’ve got a plane coming at you at mach 0.9, that doesn’t give you much besides the fact it can go at least that fast. If you’ve got an enemy that’s turning, you don’t know if it’s because it’s the hardest turn he can pull, or if he’s turning at less than that to conserve energy. There’s so much data it simply won’t have. In fact, due to limitations of computer vision, even recognizing an enemy aircraft is a challenge, though it can be mitigated by AWACS and datalink.
But it can analyze a turn, while the human simply can’t sense whether the opponent is doing a 3G or a 5G turn.
And in air combat it can be predicted quite well when a best-performance turn is needed, which an AI can easily identify based on the physics of air combat.

You way, way, way underestimate the difference between what an AI can calculate almost exactly from raw data and where the human brain fails.
You know, cheap software on your smartphone can easily interpolate many types of image onto any face, even. Can you guess what software on a plane that can see shapes could do?
At close range, even using passive IR, it can calculate the heading of a plane when the human eye sees literally a single dot… In fact, BVR air combat and the dogfight are very close to a closed environment like chess or any other game, as long as you work with digitized target coordinates. The pilots also only get information that has gone through this filter…
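The turn-analysis claim above reduces to a small kinematics calculation: for a level turn, heading rate and speed from two track samples give the load factor via n = sqrt(1 + (V·ω/g)²). A sketch with hypothetical sample numbers:

```python
import math

# Estimating an opponent's turn load factor from track data alone.
# For a level turn, lateral acceleration is a = V * omega, and the
# load factor is n = sqrt(1 + (a/g)^2). Sample numbers are invented.

G = 9.81  # m/s^2

def load_factor(speed_ms, heading_rate_deg_s):
    omega = math.radians(heading_rate_deg_s)
    lateral_g = speed_ms * omega / G
    return math.sqrt(1.0 + lateral_g ** 2)

# Two hypothetical radar track samples 1 s apart: heading changed 12 deg
# while the target flew at 200 m/s true airspeed.
n = load_factor(speed_ms=200.0, heading_rate_deg_s=12.0)
print(f"estimated load factor: {n:.1f} g")   # estimated load factor: 4.4 g
```

So a tracker that logs speed and heading already distinguishes a 3G turn from a 5G one, which is exactly the “it can analyze a turn” point.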
-
-
It only knows what it can see on sensors.
Not correct. There are several examples of ML/DL developed with a priori knowledge.
If you’ve got a plane coming at you at mach 0.9, that doesn’t give you much besides the fact it can go at least that fast. If you’ve got an enemy that’s turning, you don’t know if it’s because it’s the hardest turn he can pull, or if he’s turning at less than that to conserve energy. There’s so much data it simply won’t have.
I think this could be implemented with some sort of domain-adapted network.
In fact, due to limitations of computer vision, even recognizing an enemy aircraft is a challenge,
Source? Some years ago I attended a talk by someone doing aircraft recognition at distances over 5 nm by mixing UV/NIR images. Can’t find the abstract anymore.
But here are some other examples that I could find.
“TOWARD AUTOMATED AERIAL REFUELING: AUTOMATED VISUAL AIRCRAFT IDENTIFICATION WITH CONVOLUTIONAL NEURAL NETWORKS”
“Deep Multiple Instance Learning for Airplane Detection in High Resolution Imagery”
“Neural Network Classifier for Fighter Aircraft Model Recognition”
“Aircraft identification by moment invariants”
“Aircraft visual identification by neural networks”

In fact, that the human can work off a vague guess or estimate, while AI can only try to calculate things from what it sees (which might be wrong), is an advantage going to the human. If I know I’m fighting the Su-57, then if I see it going at mach 0.9 I’ll know it can probably go faster than that. Machine learning, unless trained on its real stats (which the Russians are keeping under wraps), won’t. AI doesn’t think; it learns by pure, unconstrained trial and error.
Yes, humans can learn and make decisions from very few examples. “AI” as you described can’t. The problem is that you are comparing C0 algorithms with full human brains. Bayesian networks, on the other hand, have the potential to ponder in a way very similar to what you are suggesting. Again, ML != DL != Bayesian networks != artificial narrow intelligence != artificial general intelligence.
-
As long as “AI” can do this kind of thing, talking about ID issues for such a system seems pretty funny to me…
https://www.researchgate.net/figure/SAR-images-of-a-T-72-tank-in-MSTAR-data-set-from-different-views-a-Two-target-images_fig1_326488086
https://www.researchgate.net/publication/324722164_Target_Reconstruction_Based_on_Attributed_Scattering_Centers_with_Application_to_Robust_SAR_ATR -
Those are SAR images, though. I’m not sure how they compare to optical images from an AI perspective, but radar tends to have better contrast, due to how it interacts with metallic surfaces. This narrows the possibilities down.
@tbuc:

Not correct. There are several examples of ML/DL developed with a priori knowledge.
I didn’t mean that. I meant that it has to work off what it sees. It can have a priori knowledge, but that doesn’t mean it’ll be able to glean parameters that were unknown to its creators. Going back to the PAK-FA example, it may know how fast it can officially go (same as a human can), but that doesn’t give it a magical ability to calculate its actual max speed until it actually does go that fast.
Source? Some years ago I attended a talk by someone doing aircraft recognition at distances over 5 nm by mixing UV/NIR images. Can’t find the abstract anymore.
This looks like a very recent development. I don’t follow the field that closely, it’s possible they solved it by now. Admittedly, planes should be one of the easier things to deal with (as long as they’re up against the sky), especially in combination with other sensors.
Yes, humans can learn and make decisions from very few examples. “AI” as you described can’t. The problem is that you are comparing C0 algorithms with full human brains. Bayesian networks, on the other hand, have the potential to ponder in a way very similar to what you are suggesting. Again, ML != DL != Bayesian networks != artificial narrow intelligence != artificial general intelligence.
Well, of course we have to compare with human brains, since that’s what we currently have. The DARPA pilotbot was deep learning, so this is what I was talking about. Bayesian Networks are awesome, but don’t they require hardware capable of solving NP-hard problems?
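On the NP-hardness question: exact inference in a Bayesian network is NP-hard only in the general case; small or sparsely connected networks are cheap on ordinary hardware, no exotic machine required. A minimal sketch of inference by enumeration on an invented two-sensor friend/hostile toy network (all probabilities made up for illustration):

```python
# Exact inference by enumeration in a tiny, invented Bayesian network:
# Hostile -> FastClosure, Hostile -> NoIFFReply (the two observations are
# conditionally independent given Hostile). All numbers are made up.

p_hostile = 0.30
p_fast_given = {True: 0.80, False: 0.20}    # P(FastClosure | Hostile)
p_noiff_given = {True: 0.90, False: 0.10}   # P(NoIFFReply | Hostile)

def posterior_hostile(fast, noiff):
    """P(Hostile | evidence), by summing the joint over both hypotheses."""
    def joint(h):
        p = p_hostile if h else 1.0 - p_hostile
        p *= p_fast_given[h] if fast else 1.0 - p_fast_given[h]
        p *= p_noiff_given[h] if noiff else 1.0 - p_noiff_given[h]
        return p
    num = joint(True)
    return num / (num + joint(False))

print(round(posterior_hostile(fast=True, noiff=True), 3))   # 0.939
```

Enumeration like this blows up exponentially with network size, which is where the NP-hardness bites; practical toolkits use structure-exploiting algorithms (variable elimination, belief propagation) to stay tractable on realistic networks.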
I’m a biophysicist, and I’ve had some indirect exposure to DL software intended for drug-discovery work. I know that the chemists who work with it do not foresee being put out of their jobs by robots, but they do look forward to their jobs getting less tedious. I also happen to know that the human brain is a fiendishly complex, intricate system that is non-deterministic, nonlinear in the extreme, and likely involves quantum effects. A deterministic machine is not capable of emulating a non-deterministic one. Quantum computers might help here, but so far, we’re still in the “getting it to work” phase.
To be clear: all my “AI” references were to deep learning, particularly as used by commercial software and DARPA. This is where we’re at when it comes to AI that’s actually seeing widespread use. I’m not up to date on the latest, hot from the stove research developments, as it’s not my field. Also, given the difference between hyped-up promises and what was actually delivered (useful as it might be), I’m also wary of claims of new, game-changing algorithms that can replace humans in any role except the menial ones. TBH, I might be wrong, but I think these algorithms won’t become any more than tools until they’re running on quantum computers.