Falcon 4 in the real world
-
@M79:
If computers are as good as they’re trying to prove here, why not just skip the whole war and simulate its results? The winner just sends the loser lots of explosives to destroy the targets the computer decided should be destroyed, and the loser sends a bit less explosives to the winner to do the same.
I don’t think any humanitarian law is against it, if lives can be saved this way by evacuating cities before destroying them. The computer just gives results that both countries must obey. A fight of 4 AI vs. 4 human pilots is something that could really show how good computers are, and it would need to be run quite many times to see whether the computer or the human can adapt new fighting techniques against the other.
…wasn’t there a Star Trek episode based on this? And then as the casualty reports came in people on the list were just expected to report to a disintegration location for…er…disposal?
-
I agree with Blu3wolf…this isn’t really about “AI” as we tend to think about it - no tactics, no “thinking” involved. Just the achievement of a mathematical solution to an end game on an optimal path. It’s a dynamic situation, but it’s just pure, straight math based on kinetics and dynamics. The only tactical decisions involved are still left to humans at the strategic level - humans control the battle, “AI” only executes the/a fight. Pretty much like any other weapon system.
-
You can. It all depends on your training data - and you can develop training data for handling unknown capabilities, too.
In the case of a collision where the AI-driven car “isn’t necessarily at fault”, it’s not a great analogy. Partly because AI development is generally not done by devising clever state machines to follow rules, and partly because AI is already capable of reacting to other traffic accordingly.
If you want to see some excellent video showcasing what you can achieve with AI and machine learning, I’d suggest comparing Boston Dynamics videos from the start of the last decade with the videos from the end of the last decade. I’d highlight that at the start of that development timeframe, you have a robot that looks like an excellent example of your argument: not smart enough to, as you put it, take appropriate action in a dynamic situation. I suggest that at the end of that timeframe, the same robot hardware is an excellent counter to your argument.
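For what it’s worth, “training data for unknown capabilities” usually means something like domain randomization. A minimal sketch, with parameter names and ranges entirely made up for illustration:

```python
import random

# Toy sketch (my own illustration, not anyone's actual pipeline) of "domain
# randomization": rather than training only against known opponent specs,
# you randomize parameters across and beyond the plausible range, so the
# learned policy doesn't overfit to a single known spec sheet.
def sample_opponent():
    return {
        "max_mach": random.uniform(0.8, 2.5),         # deliberately wider than any known type
        "turn_rate_deg_s": random.uniform(5.0, 30.0),
        "sensor_range_km": random.uniform(20.0, 200.0),
    }

training_scenarios = [sample_opponent() for _ in range(10_000)]
print(training_scenarios[0])
```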
Perhaps so, but do we really want AI-controlled fighters making decisions by themselves on which plane to shoot or which target to bomb? I think we’re headed down a potentially dangerous path with this development. I can see the benefit that you don’t have to send your soldiers into harm’s way, but at the same time this is also the danger. The decision for politicians to go to war might become a lot easier if they don’t have to fear loss of lives on their side. I don’t want to turn this into a political debate, and I realize that the development is going full speed ahead and is probably inevitable. But I think it is very questionable whether we should really want to go down this road.
But for me a big part is also the romantic side of it, because fighter pilot is a job I always dreamed about and one day that job is likely to be gone. Not that it matters much, since I’m too old to become a fighter pilot anyway, but still…
Anyway, that’s inevitable too. Every weapons system is outdated at some point in time, so that will be true for (manned) fighter jets as well.
-
I agree with Blu3wolf…this isn’t really about “AI” as we tend to think about it - no tactics, no “thinking” involved. Just the achievement of a mathematical solution to an end game on an optimal path. It’s a dynamic situation, but it’s just pure, straight math based on kinetics and dynamics. The only tactical decisions involved are still left to humans at the strategic level - humans control the battle, “AI” only executes the/a fight. Pretty much like any other weapon system.
Not just at the strategic level. In a CAS situation there are a lot of tactical decisions to be made on the battlefield. In an air battle the AI has to identify friend or foe, etc. So the AI has plenty of tactical decisions to make, which can be made more complicated in all kinds of ways. Jamming, bad weather, etc.
-
Not just at the strategic level. In a CAS situation there are a lot of tactical decisions to be made on the battlefield. In an air battle the AI has to identify friend or foe, etc. So the AI has plenty of tactical decisions to make, which can be made more complicated in all kinds of ways. Jamming, bad weather, etc.
I consider those decisions to be of a “strategic” nature too, just at a finer level - and typically in an actual CAS mission there isn’t as much room to deviate as one might like to imagine - the call is made for support and the supported either CANCO or CANTCO. Once the contract is made it’s executed, and the time for any “tactical” decisions is past.
IFF is just another technical problem that doesn’t really require “AI” but can be solved at a purely automated/technical level. The issue then becomes data quality and reliability…and if the entire battlefield is automated, do you even still require the CAS mission, or do you simply manage point defense/offense (strategically) within the AOR? Does an AI air support vehicle simply fly in and destroy anything that isn’t ID’d as a friend? And do tacticians develop new ideas about “acceptable casualties”? How much autonomy do you actually give an AI weapon?
-
You can. It all depends on your training data - and you can develop training data for handling unknown capabilities, too.
You can’t train a machine learning model to have imagination. At most, you can get a diverse data set, but then it’ll still be at a loss when something outside those parameters crops up. To some degree, you can compensate for “known unknowns”, but I have not yet seen an AI algorithm that could compensate for “unknown unknowns” the way humans do.
Yes, I am talking about ML and not just a state machine. Anti-computer tactics should exist for that, too, just more sophisticated ones. It’s good for some things, but it just isn’t an actual, thinking human. Even the Boston Dynamics bots are just pretending really well. I saw the video; they’re impressive as far as robots go, but it’s not the kind of dynamic environment I was thinking about. Think about a student who learns everything by heart - such a student may pass an exam, maybe even pass for someone with a clue, but put that student in a situation that requires creative thinking, and you’ll only get a blank stare. Even something like speech recognition requires a lot of training and is utterly dependent on what microphone you use, whereas even a preschool child can recognize, over the telephone, not only the words but also who’s talking, even if the speech is very noisy.
Show me an AI that gets trained in F-4 vs. F-4, then takes down a human-flown MiG-21 (or at least survives the fight), and then we’ll be getting somewhere.
-
…wasn’t there a Star Trek episode based on this? And then as the casualty reports came in people on the list were just expected to report to a disintegration location for…er…disposal?
Well, I’ve got a very good library for this. Just name the episode.
The whole conversation is about how well a computer can adopt new “tactics”, where the human is not using the same losing tactic again (trial and error). Can the human do this faster, or can it invent new strategies to beat the AI better?
-
There is no hard line between robots operating on specific instructions and generalized AI. The generalization of instruction is not a binary thing (pun intended) but a continuum. Pick apart any AI, no matter how generalized, and you’ll find some deliberate human instruction somewhere.
AI will change, and already has changed, the nature of air warfare. The new PGMs coming, or here already, smartly prioritize targets they sense as they arrive in a designated area, network and coordinate with others, etc. The idea that an AI can never replace a human pilot misses that AI isn’t going to be a drop-in direct replacement. The nature of the whole system will adjust to the new characteristics of the components. Airframes driven by AI have happened, and something resembling a current fighter jet will happen. AI is already superior at a lot of tasks that the human pilot performs, and that list is growing. The things that AI can’t do well yet will be handled by a combination of accepted degradation, redesign that shifts those tasks off the platform, and improvement over time.
Simulated clean and nice war (yes, like the Star Trek episode) misses the point of war: the lack of rules. Gentleman’s agreements usually favor one side of a fight. Warfare is the breakdown of diplomacy, and warfare rules are a form of diplomacy. A terrorist bombing a subway is conducting warfare to their strengths. The nation state may demand “hey, you can’t do that, you have to use multi-million dollar machines wearing special matching clothing, attacking only other multi-million dollar machines with other people in different matching clothing inside.” It’s no surprise that a force which knows it will lose in that system won’t participate. Would you agree to play poker to resolve the conflict if you’re better at checkers? If the result of a simulation was annihilation, why would you agree to the results? What do you have to lose by rebelling?
-
@M79:
Well, I’ve got a very good library for this. Just name the episode.
The whole conversation is about how well a computer can adopt new “tactics”, where the human is not using the same losing tactic again (trial and error). Can the human do this faster, or can it invent new strategies to beat the AI better?
Kind of goes to something I’ve heard said about tech and warfare - that a truly significant advancement doesn’t just change the fight, it changes HOW you fight. The real problem here is that we’re thinking of AI in the context of how it can be used to win chess games…that’s not how it’s used on a battlefield. Not yet, at least…and I expect to be long dead before that time, or because of it.
Oh…and it’s this episode -
-
This thread made me do some research on the current state of machine learning and its creativity. Attached below is my most worrying finding; the joker’s joke was beyond funny
-
The biggest problem with AI operating any kind of platform is the idea of “perception”. The world is full of almost infinite objects and infinite problems (with their solutions) at infinite resolutions, and without some sort of filter there is no way to deal with it all (humans use the idea of perception, where our brain filters that which is “not required” out of our area of perceptible thought).
Our machines currently have two problems:
1. The sensors and perception we have right now are not good enough (or they are too good, depending on your idea of perception). The information provided to the AI is too much for it to work with, at the speeds required, to complete complex tasks like driving down a road in a suburban environment. We have specific sensors that can accomplish simple tasks, like staying a certain distance away from vehicles in front of us, or keeping within the lines (a toy sketch of that kind of single-task controller follows below), but the entire context is something that cannot currently be addressed at the computing rates we have with the sensors that are available.
2. The perception of what is right - the trolley problem: it will be hard for people/society to accept that someone in a cubicle somewhere will devise a program that prioritizes life. Or worse yet, will support a machine learning algorithm that calculates the solution with no “emotion” for us. For now, when a situation like this occurs, we write it off as “well, the driver/pilot/whoever made the best decision they could with the information they had at the time.” For the leap to full automation, society will have to approve the response before it happens. I think this will be very difficult.
If we can get around those two issues (the first is bound to happen eventually), then we’ll be in business!
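To illustrate the kind of narrow, single-task automation point 1 refers to, here’s a minimal sketch - my own toy proportional controller, not any vendor’s system:

```python
# Toy proportional controller for the single-sensor/single-task case:
# hold a set gap to the vehicle ahead. Real systems are far more involved;
# the point is that this narrow task needs no understanding of context.
def follow_distance_controller(measured_gap_m: float,
                               desired_gap_m: float = 40.0,
                               gain: float = 0.5) -> float:
    """Return a speed adjustment in m/s: negative brakes, positive accelerates."""
    error = measured_gap_m - desired_gap_m
    return gain * error

print(follow_distance_controller(25.0))  # too close -> -7.5 (brake)
print(follow_distance_controller(60.0))  # too far   -> +10.0 (speed up)
```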
-
Perhaps so, but do we really want AI-controlled fighters making decisions by themselves on which plane to shoot or which target to bomb? I think we’re headed down a potentially dangerous path with this development. I can see the benefit that you don’t have to send your soldiers into harm’s way, but at the same time this is also the danger. The decision for politicians to go to war might become a lot easier if they don’t have to fear loss of lives on their side. I don’t want to turn this into a political debate, and I realize that the development is going full speed ahead and is probably inevitable. But I think it is very questionable whether we should really want to go down this road.
So far, the answer has been no, we don’t want that - which is why the Samsung auto-turrets on the DMZ are not operating in autonomous mode.
Notably, you can still have weapons systems which are very effective, without the AI making that decision for itself. Instead the AI makes a request, which a human operator then approves. Still humans killing humans, just with a fancier tool than usual.
-
Insert the joke about AI being just if statements here.
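Since someone has to, a toy rendition (obviously not how ML actually works):

```python
# The joke, rendered faithfully:
def artificial_intelligence(bandit_ahead: bool, bandit_behind: bool) -> str:
    if bandit_ahead:
        return "shoot"
    elif bandit_behind:
        return "evade"
    else:
        return "fly straight"

# ...add a few million more branches and rebrand it as machine learning.
print(artificial_intelligence(True, False))
```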
-
You can’t train a machine learning model to have imagination. At most, you can get a diverse data set, but then it’ll still be at a loss when something outside those parameters crops up. To some degree, you can compensate for “known unknowns”, but I have not yet seen an AI algorithm that could compensate for “unknown unknowns” the way humans do.
Yes, I am talking about ML and not just a state machine. Anti-computer tactics should exist for that, too, just more sophisticated ones. It’s good for some things, but it just isn’t an actual, thinking human. Even the Boston Dynamics bots are just pretending really well. I saw the video; they’re impressive as far as robots go, but it’s not the kind of dynamic environment I was thinking about. Think about a student who learns everything by heart - such a student may pass an exam, maybe even pass for someone with a clue, but put that student in a situation that requires creative thinking, and you’ll only get a blank stare. Even something like speech recognition requires a lot of training and is utterly dependent on what microphone you use, whereas even a preschool child can recognize, over the telephone, not only the words but also who’s talking, even if the speech is very noisy.
Show me an AI that gets trained in F-4 vs. F-4, then takes down a human-flown MiG-21 (or at least survives the fight), and then we’ll be getting somewhere.
AI can analyze data and optimize decisions in a way that no human can ever do. It will only get better with time, too. It was once thought that AI could not learn how to play the game Go, given that there is no clear strategy to it. Now a neural net that learned how to play through self-play, without being explicitly programmed with any strategy, is unbeatable by any human.
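As a toy illustration of the self-play idea (my own trivial example on one-heap Nim, nothing like the actual AlphaGo architecture): give the agent only the legal moves, let it play against itself, and it finds the strategy by trial and error.

```python
import random
from collections import defaultdict

# One-heap Nim: players alternately take 1-3 objects; whoever takes the last
# one wins. Optimal play leaves the opponent a multiple of 4. The agent is
# told only the legal moves and learns move values purely from self-play.
Q = defaultdict(float)                     # Q[(heap, take)] = learned move value
ACTIONS = (1, 2, 3)

def pick(heap, eps=0.1):
    legal = [a for a in ACTIONS if a <= heap]
    if random.random() < eps:              # explore occasionally
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(heap, a)])

for _ in range(50_000):                    # self-play episodes
    heap, history = 15, []
    while heap > 0:
        a = pick(heap)
        history.append((heap, a))
        heap -= a
    # the player who moved last won; alternate the credit back through the game
    for i, (h, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(h, a)] += 0.1 * (reward - Q[(h, a)])

print(pick(15, eps=0))                     # typically learns 3 (leaving 12, a multiple of 4)
```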
Pilotless aircraft are an obvious thing for the future. The aircraft will no longer be constrained during maneuvering by the need not to crush the pilot or cause him to black out. It will never miss a sensor reading or a threat warning, and will always apply the optimal, by-the-book problem-resolution method instantly. It can also make 99.99% of decisions by itself, and perfectly so, while referring to human operators for remote guidance on the rare occasions when that is needed.
-
AI can analyze data and optimize decisions in a way that no human can ever do. It will only get better with time, too. It was once thought that AI could not learn how to play the game Go, given that there is no clear strategy to it. Now a neural net that learned how to play through self-play, without being explicitly programmed with any strategy, is unbeatable by any human.
This is incorrect. Go is a game with clear goals and static rules. It is impossible to make a state machine that would be beyond decent at it. However, for a neural net, this is an easy problem to solve.
Yes, the AI will always apply the “optimal”, by the book method, and that’s how it’ll be utterly trounced by the humans. Tactics will be developed accounting for the AI always flying a “perfect” procedure, probably some sort of feint to bait it into a maneuver that, if you know it’s coming, you can counter. The feint doesn’t even have to be a maneuver, it could be achieved by spoofing the AI’s sensors to make it think it sees something different than it does. Machine vision, in particular, is very far behind human capabilities, and its peculiarities could likely be exploited.
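The sensor-spoofing point isn’t hypothetical, by the way - in machine vision it’s known as an adversarial example. A minimal sketch on a toy linear classifier (my own illustration; the classic method is FGSM, Goodfellow et al., 2014):

```python
import numpy as np

# A tiny, carefully-signed perturbation flips a linear classifier's decision
# even though the input barely changes - the same idea behind fooling machine
# vision with stickers or noise a human wouldn't even register.
rng = np.random.default_rng(0)
w = rng.normal(size=100)                  # "trained" weights of a linear model
x = rng.normal(size=100)                  # an input the model currently classifies
score = w @ x
eps = 1.1 * abs(score) / np.abs(w).sum()  # just enough per-element budget to flip it

x_adv = x - eps * np.sign(w) * np.sign(score)   # FGSM-style signed step
print(np.sign(score), np.sign(w @ x_adv))       # the decision flips
print(eps)                                      # while each element moves only slightly
```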
-
This is incorrect. Go is a game with clear goals and static rules. It is impossible to make a state machine that would be beyond decent at it. However, for a neural net, this is an easy problem to solve.
Yes, the AI will always apply the “optimal”, by the book method, and that’s how it’ll be utterly trounced by the humans. Tactics will be developed accounting for the AI always flying a “perfect” procedure, probably some sort of feint to bait it into a maneuver that, if you know it’s coming, you can counter. The feint doesn’t even have to be a maneuver, it could be achieved by spoofing the AI’s sensors to make it think it sees something different than it does. Machine vision, in particular, is very far behind human capabilities, and its peculiarities could likely be exploited.
-
Air combat also has only quite a few rules of thumb: stay out of the NEZ, keep your speed and Ps high, fly certain maneuvers.
-
The “AI” can calculate the performance of the opponent in REAL time, while the human can at best guess or estimate.
-
The only real problem for an AI is translating the data gathered by radar, EO and other systems. If that works, the SA and decision-making capability of a computer is far, far above a human’s.
Considering this, a well-programmed AI will sooner or later easily defeat any human pilot, as long as they have the same SA. And if a target is tracked by radar or EO plus datalink, the AI can identify even the type of the opponent; using Ps curves and observing how the opponent flies, even the fuel level of the plane can be estimated, so the AI simply knows how much time it has to win based on fuel.
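For reference, the Ps curves mentioned here come from the standard specific excess power formula, Ps = V(T − D)/W. A rough sketch with made-up numbers (a real system would take V from track data and estimate thrust, drag and weight from the identified type):

```python
# Specific excess power: the climb/acceleration margin an aircraft has left.
# Ps = V * (T - D) / W, with V in m/s and thrust/drag/weight in newtons.
def specific_excess_power(velocity_ms: float, thrust_n: float,
                          drag_n: float, weight_n: float) -> float:
    """Return Ps in m/s; positive means energy to spare, negative means bleeding energy."""
    return velocity_ms * (thrust_n - drag_n) / weight_n

# Made-up example: 250 m/s, 70 kN thrust, 50 kN drag, 150 kN weight -> ~33 m/s
print(specific_excess_power(250.0, 70_000.0, 50_000.0, 150_000.0))
```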
-
This is incorrect. Go is a game with clear goals and static rules. It is impossible to make a state machine that would be beyond decent at it. However, for a neural net, this is an easy problem to solve.
Yes, the AI will always apply the “optimal”, by the book method, and that’s how it’ll be utterly trounced by the humans. Tactics will be developed accounting for the AI always flying a “perfect” procedure, probably some sort of feint to bait it into a maneuver that, if you know it’s coming, you can counter. The feint doesn’t even have to be a maneuver, it could be achieved by spoofing the AI’s sensors to make it think it sees something different than it does. Machine vision, in particular, is very far behind human capabilities, and its peculiarities could likely be exploited.
I said there is no clear strategy for playing it, not clear goals.
99.99% of aerial combat is sensor based now. Vision is almost irrelevant. AI will beat a human pilot almost every time.
-
The “AI” can calculate the performance of the opponent in REAL time, while the human can at best guess or estimate.
No, it can’t. It only knows what it can see on sensors. If you’ve got a plane coming at you at Mach 0.9, that doesn’t tell you much besides the fact it can go at least that fast. If you’ve got an enemy that’s turning, you don’t know whether it’s the hardest turn he can pull, or whether he’s turning at less than that to conserve energy. There’s so much data it simply won’t have. In fact, due to the limitations of computer vision, even recognizing an enemy aircraft is a challenge, though that can be mitigated by AWACS and datalink.
In fact, that the human can work off a vague guess or estimate, while the AI can only try to calculate things from what it sees (which might be wrong), is an advantage for the human. If I know I’m fighting the Su-57, then when I see it going at Mach 0.9 I’ll know it can probably go faster than that. A machine learning model, unless trained on its real stats (which the Russians are keeping under wraps), won’t. AI doesn’t think; it learns by pure, unconstrained trial and error.
Everyone is consistently giving these algorithms too much credit. Those who actually work with them (I personally don’t, but my uncle does) know they’re limited. They are amazing tools, but they’re nothing more than tools that can be used by a thinking human, like a chemist or a pilot, to achieve something, like finding a drug candidate or getting a firing solution. Rather than EDI, think ERS from HAWX. IMO, AI will supplement, rather than replace, the pilot. Whether the pilot sits in a comfy chair on the ground and flies remotely or is crammed into the cockpit is another matter entirely, and more a question of how well we can prevent the remote-control datalink from being jammed.
-
Agree again.
-
Let me put it this way: I am a physical chemist; I work with big-data analysis of multidimensional attosecond/femtosecond spectroscopy. I have developed tons of code in MATLAB, C++, Python, etc. for ML/DL, but also for my hobbies (flight simulation and wargaming), and I can tell you, dragon, you don’t understand what you are talking about. Actually, this topic has a huge number of misconceptions about AI, ML, DL, CNNs, training, generalized intelligence, conscious vs. unconscious processes in the human mind, models of the cognitive brain, etc. Even the meaning of the word “think”, dragon, requires a definition.
There is enough experimental evidence that most of our actions are pre-computed in an unconscious way prior to the final conscious step. Where does “thinking” take place? There is evidence that there are tons of sub-cortex circuits running C0 computations before the consciousness “takes a decision”. Think about the neuronal workspace model by Dehaene et al., for example. ML/DL are already simulating C0 algorithms nowadays… a few lines of code can do miracles.
Last year a paper was published where a kind of DL algorithm was trained to read and “understand” (sic) the literature about some physico-chemical properties of materials for organic electronics (AFIR). At the end, the network was asked for a suggestion of a novel material with enhanced properties. And it delivered… it read the literature, and proposed a material with novel properties that had never been thought of before. The research group tested the material, and indeed the network was right!
Although C0 algorithms are nowadays easily implemented, C1/C2 features are indeed still in development… first steps. But it is happening. Bayesian networks are a hot topic in that regard, and are being implemented as we speak with amazing results. If you want any reviews (papers) about these topics, I am more than glad to share them with you.
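To give a flavor of the Bayesian reasoning mentioned above (a toy example of my own, not from any of the cited papers): a single Bayes update, the building block such networks chain together.

```python
# Toy Bayes update: P(threat | warning) from assumed priors and likelihoods.
# All numbers are invented for illustration.
p_threat = 0.02                     # assumed prior base rate of a real threat
p_warn_if_threat = 0.95             # assumed sensor sensitivity
p_warn_if_clear = 0.10              # assumed false-alarm rate

p_warn = p_warn_if_threat * p_threat + p_warn_if_clear * (1 - p_threat)
posterior = p_warn_if_threat * p_threat / p_warn
print(round(posterior, 3))          # ~0.162: one warning alone is weak evidence
```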
This paper here is very interesting, although requires some understanding of the topic:
“What is consciousness, and could machines have it?”
https://science.sciencemag.org/content/358/6362/486?rss=1