ChatGPT's answer about the flight model of BMS
-
Just out of curiosity, I asked ChatGPT about the accuracy of BMS's flight model. Here's the answer.
The flight model of Falcon BMS, which simulates the F-16 aircraft, is highly regarded for its accuracy and realism. The developers have put significant effort into making the simulation as close to the real F-16 as possible. This includes detailed modeling of aerodynamics, mechanical aspects, and the aircraft’s control systems.
One aspect that’s highlighted is the advanced modeling of the aircraft’s roll axis. The flight model aims to simulate the roll behavior of the real F-16, including its dynamics and how it’s influenced by various factors like the aircraft’s velocity vector. This level of detail contributes to the feeling of flying a real F-16 in the simulation.
The developers have also incorporated data and principles from NASA’s technical papers, specifically NASA Technical Paper 1538, which discusses the stall and post-stall characteristics of a fighter airplane with relaxed longitudinal static stability. This incorporation of real-world, scientific data has significantly contributed to the realism of the flight model.
In addition, the flight model in Falcon BMS takes into account the mechanical parameters like the mass and position of the Center of Gravity (CG), as well as the Inertia Matrix. This approach ensures that the physical model of the aircraft in the simulation behaves realistically, especially in terms of its trajectory and rotations. The model is continuously updated to reflect changes in mass due to fuel consumption and weapon load, further adding to the realism.
However, it’s important to note that despite these efforts, some specific maneuvers may not be perfectly replicated in the simulation compared to their execution in the actual aircraft. For example, there has been discussion around the accuracy of the knife edge maneuver in the simulation, indicating that while the flight model is highly advanced, it may not be perfect in every aspect.
Overall, the Falcon BMS flight model is a result of extensive development and research, aiming to closely mimic the real-world behavior of the F-16. This makes it a highly respected and realistic flight simulation for enthusiasts and professionals alike. For more in-depth information and technical details about the flight model, you can explore the Falcon BMS Forum and Falcon BMS Articles.
-
Very interesting. Artificial Intelligence on top of Artificial Intelligence. Teledyne, here we come!
-
it’s pretty clearly scraping this discussion forum as part of its training / knowledge graph
-
@airtex2019 said in ChatGPT's answer about the flight model of BMS:
it’s pretty clearly scraping this discussion forum as part of its training / knowledge graph
…and keep in mind that this A.I. ability to inhale info, process it, and provide it back to us in a meaningful way is just beginning. I would not be surprised if, 10 years from now, you could feed Bard or his contemporary the entire BMS source code (or any source code, for that matter), and receive in seconds a drastically optimized version of that source code.
This is going beyond the scope of this post, but what I really look forward to is A.I. musical composition. I’m confident that A.I. will be able to study all of the classics, find patterns of musical sequences that are associated with human emotions, and produce masterpieces without assistance. Composers could then take the results and enhance them even further. I would not be surprised if the entertainment industry-- especially the movie-making industry-- would show their finished film to an A.I. which would then create a soundtrack which a human composer would then polish for presentation. I truly hope to live at least 20 more years to see the amazing advances that are going to happen if we can just somehow manage to maintain global civility.
-
On one hand it's fascinating, on the other it's frightening. All these technical advances have their downsides, especially when rulers use them against humanity, and they already do. That's the other side of the coin, often overlooked by tech enthusiasts. Apparently ChatGPT uses and repeats the information it finds and "remixes" it. I have seen examples concerning history and politics where the "AI" is blatantly lying and spouting propaganda. As said, it's a tool that can in theory be great, but it can be, is, and will be abused for less positive purposes.
-
Knife edge lol
-
[…]
However, it’s important to note that despite these efforts, some specific maneuvers may not be perfectly replicated in the simulation compared to their execution in the actual aircraft. For example, there has been discussion around the accuracy of the knife edge maneuver in the simulation, indicating that while the flight model is highly advanced, it may not be perfect in every aspect.
[…]
This virtual guy spends too much time on Reddit
-
Seems quite on par with standard ChatGPT: give it a topic and it will go on and on, with little care for cross-referencing facts, presenting misinformation alongside actual information with oblivious confidence, mirroring tone and inference that sometimes amounts to propaganda depending on the original content it scraped from the web. At times you can point out errors in responses to ChatGPT, or even just reply, "Are you sure about that?", to elicit a review of the previous response and an attempt to provide corrected or more accurate information (sometimes failing miserably). You can even end up in endless loops of idiocy where it suggests "A", you tell it "A" is incorrect, it apologizes and suggests "B", which you tell it is also not correct, so it goes back to suggesting "A", which you already told it was incorrect.
I’ve been playing with the OpenAI API for a year now for my OpenAI Plugin for VoiceAttack, and I am quite familiar with the limitations of ChatGPT. Once you know how the sausage is made, it’s hard to look at it the same way again. That being said, I have found that responses are far more accurate and dependable when we provide the answer for ChatGPT so it can formulate a human sounding response based on that data rather than relying upon its existing (and dated) knowledge base baked into the model.
I know… go figure, right? Give it the answers, and the answers it gives you will always be correct.
The OpenAI Embeddings API allows us to provide a section of text (a sentence or paragraph, or a few of either) and generate a single embedding vector (1,536 floats with the ada-002 model) representing the meaning and context of the text. Creating a local database of such entries built from an entire BMS technical manual, or all BMS manuals, would allow us to use Embeddings to quickly and accurately match natural-language queries to relevant entries in the database by generating new embedding vectors for each user input. These new vectors can be compared to the existing vectors of each entry in the database using cosine similarity to gather one or more entries most similar to the original user input. Finally, those entries can be provided as part of the user input to ChatGPT to formulate a response, along with simple instructions at the top to 'Use the supplied data below to formulate a response to the supplied user input'.
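As a rough illustration of the retrieval step described above, here is a minimal sketch. The toy 3-dimensional vectors and the section names are illustrative assumptions; in a real setup each vector would be the 1,536-dimension output of the Embeddings API.

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_matches(query_vec, database, k=2):
    # database: list of (text, vector) pairs. Score every entry against the
    # query vector and return the k most similar entry texts.
    scored = sorted(
        ((cosine_similarity(query_vec, vec), text) for text, vec in database),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

# Toy "manual" database with hand-made vectors (illustrative only)
manual_db = [
    ("Nosewheel steering section", [1.0, 0.0, 0.0]),
    ("Radar modes section",        [0.0, 1.0, 0.0]),
    ("Ground ops section",         [0.7, 0.7, 0.0]),
]

query = [0.9, 0.1, 0.0]  # would be the embedding of the user's question
print(top_matches(query, manual_db))
```

The entries returned this way are what would then be pasted into the ChatGPT prompt, below the 'use the supplied data' instruction.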
I have had quite a bit of success in early proof-of-concept testing, and can see this becoming a much better way to ask about information for a game or even a highly technical simulator like BMS. I'm hoping to explore means of referencing other data, such as the ability to offer a list of relevant sections of (various) manuals for cross-reference and to view diagrams and images, or even just simple text-to-speech responses. I'm sure this will be a very useful way of retrieving information using natural language rather than keyword(s) in a search bar, and definitely more accurate and dependable than a Google search. This concept could even be used to carry out instructions or execute applications based on predefined triggers, though it would obviously be too slow for keypress voice commands: the turnaround time for processing would exceed a second, and voice commands are best with a speech-to-action turnaround in the sub-second, preferably millisecond, range.
Presently, the task of creating such an embeddings database from a PDF (or several) is a manual and laborious one, even though much of it is offloaded to AI processing, requiring keen attention to detail and a lot of hand-holding and pre- or post-processing to minimize useless data in the database, as well as to properly parse a topic into multiple entries. I hope to streamline this process into an application which can more heavily rely upon AI to make decisions and handle ambiguous pre- and post-processing automatically. The new OpenAI Assistants API will be an invaluable tool in combination with this concept one day, but realistically it will be many months before independent utility developers such as myself have a ready-to-use, hands-off application.
Very interesting stuff, very powerful tools - very accessible even right now. I rarely use Google to look up basic facts anymore, I simply “talk” to AVCS CHAT and listen to the response spoken back to me with text-to-speech… and when something is clearly wrong or even seems wrong, I can berate it verbally which is very cathartic as compared to typing out “you are mistaken”.
-
This perfectly demonstrates that there is nothing "intelligent" in ChatGPT.
It's just spitting out what it gathers; whether it's true or not isn't even a factor.
The knife-edge pass comment is the perfect example.
Our world will become worse and worse.
-
@AviationPlus said in ChatGPT's answer about the flight model of BMS:
Knife edge lol
Oh Sh**t here we go again!
LOL
-
Can you also ask ChatGPT about the differences between the various blocks?
-
@AviationPlus said in ChatGPT's answer about the flight model of BMS:
Knife edge lol
GPT is a bit confused.
-
Sometimes it occurs to me that it's a good thing no aliens ever tried to browse the human internet. With all the nonsense, they'd conclude in less than 5 minutes that they should nuke us from orbit before we spread across the universe.
-
@SoBad said in ChatGPT's answer about the flight model of BMS:
@airtex2019 said in ChatGPT's answer about the flight model of BMS:
it’s pretty clearly scraping this discussion forum as part of its training / knowledge graph
…and keep in mind that this A.I. ability to inhale info, process it, and provide it back to us in a meaningful way is just beginning. I would not be surprised if, 10 years from now, you could feed Bard or his contemporary the entire BMS source code (or any source code, for that matter), and receive in seconds a drastically optimized version of that source code.
> This is going beyond the scope of this post, but what I really look forward to is A.I. musical composition. I’m confident that A.I. will be able to study all of the classics, find patterns of musical sequences that are associated with human emotions, and produce masterpieces without assistance. Composers could then take the results and enhance them even further. I would not be surprised if the entertainment industry-- especially the movie-making industry-- would show their finished film to an A.I. which would then create a soundtrack which a human composer would then polish for presentation. I truly hope to live at least 20 more years to see the amazing advances that are going to happen if we can just somehow manage to maintain global civility.
It happened years ago.
But in fact this is pretty easy: it's just remixing many existing pieces of music to produce something new but similar. Music is, in fact, very algorithmic and mathematical, which is why you can't distinguish it from a human's work.
Meanwhile, you can VERY easily detect the dumb bots when they draw a human with 6 fingers…
The hype around ChatGPT and the other stuff is large, but if you ask me, this is just another dotcom bubble. When I ask about anything you can't easily find and scan, especially not in English, the AI becomes a plain stupid liar.
Even when "AI" produces something, it is often inaccurate or useless. The fans of AI rely on survivorship bias: they show only what is valuable, but not the n+1 useless / junk outputs of the AI…
The AI seems intelligent to average people because they mostly ask wiki-level things or below, and the AI simply recognizes the most general pattern, which is acceptable to many humans. Even if it is not accurate, it is believable. Ask a really specific thing and the AI is hopeless, because it can't really learn and it does not have real human-like knowledge with the power of real mental combination.
Just try to get it to draw any basic military-aviation-related thing. I spent hours with these AI image generators; even the most basic request was impossible in n+1 areas. Yes, it generated a nice fantasy-themed image, but it was ANYTHING but what I requested…
Does it look nice? Yes. Is it what I requested?
NOPE. NOT EVEN A BIT. I wanted a woman warrior with long dark hair, leaning on her sword and standing with one foot on a helmet. Not even the hair and eye color is accurate in ANY detail. It is just a randomly remixed woman's face with a fantasy armor layer.
This is the current level of AI: it can generate 99.99% random images.
According to Midjourney, this is an F-15 Eagle fighter aircraft. You can't define it more exactly than that. It is not able to draw even a fully random image of an EXACT object. Just imagine how many parameters it takes to describe an F-15…
The only thing AI can do now is produce fully random things and sometimes, if you're lucky, provide something at least barely similar to what you requested. The AI can serve only those "customers" who do not have any real and exact request…
And the AI has no idea what happens in a case like this.
Compare that with what a 100% noob can do with PS7 within 20 minutes…
-
If the off-topic is still allowed, and having been an AI practitioner for many years now, I will make 2 comments on LLMs (large language models) (I do not know / have not mastered other generative models well):
-
LLMs are not smart. As Mav JP stated, they spit out information that was injected before. The way they do it is really interesting. But this is not smart.
-
Where I see very interesting application is that LLMs can simplify the interface between the machine and humans. I see it as some sort of modern age “Windows”.
e.g.: Feed a custom LLM the BMS manual, train it & customize it to minimize hallucination, and then, instead of spamming the BMS forum with "Why is my aircraft not turning on the ground?" and risking an "RTFM" answer (I miss those good old days :p), the LLM will provide the NWS info, even with the default shortcut for it.
And this is cool… I think.
And this would be possible ONLY because there is a BMS manual, well written, with all the relevant info inside it (it does not even need to be super well written, btw; the approach is quite robust to mixed-up ideas within documents). So the smartness is within the BMS dev team; the LLM is just the tool that helps you access the info (in case you are so freaking lazy you can't freaking read a well-written manual).
Joke aside, this has many interesting applications when you have crazily complicated norms to follow, with updated versions stacked on top of each other and crazy, badly done versioning…
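A minimal sketch of the manual-grounded prompting described above. The function name, instruction wording, and manual excerpt are all illustrative assumptions, not any particular API:

```python
def build_grounded_prompt(question, manual_excerpts):
    # Stuff the retrieved manual sections into the prompt so the model
    # answers from the supplied text instead of its baked-in knowledge.
    context = "\n\n".join(manual_excerpts)
    return (
        "Use only the BMS manual excerpts below to answer the user's question. "
        "If the excerpts do not contain the answer, say so.\n\n"
        "--- MANUAL EXCERPTS ---\n" + context + "\n\n"
        "--- QUESTION ---\n" + question
    )

# Hypothetical excerpt text, for illustration only
prompt = build_grounded_prompt(
    "Why is my aircraft not turning on the ground?",
    ["Nosewheel steering (NWS) must be engaged for ground turns."],
)
```

The resulting string would be sent as the user message; grounding the model this way is what makes an "RTFM bot" answer from the manual rather than hallucinate.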
-
-
@yop217 Well said. LLM AI is a great tool for information so long as what it presents is accurate to the input and factual to the subject. The largest issue is that an LLM AI model is incapable of being unsure, and can never respond with, "I am not quite certain about your question given the provided context. Could you please elaborate, providing more detailed information about what you are asking?"
Instead, it will make an assumption and then boldly attempt to answer the user input based on that assumption – if it is an incorrect assumption, the response can be incorrect or even misleading.
I think these faults should be the focus for improvement moving forward, as they are the single largest barrier to more dependable mainstream use. Right now, the users of ChatGPT best able to actually improve their productivity with it are those who know its faults very well and can account for and work around them in a timely manner that does not cancel out the productivity gained by using such a tool.
In the end, that is all it is: a tool. "Needs improvement" should rightfully be stamped on the side, and I very much understand @Mav-jp's trepidation and lack of enthusiasm for this tool. It's one thing for it to regurgitate incorrect facts with "the best intentions", but when given the numbers and instructions for a calculation, still producing wrong answers or using bad math to get the result is downright unnerving, to say the least.
That being said, I very much find it useful as well as entertaining. I feel it is more useful for those who know and can work around these limitations, but for simple tasks such as asking if you’re using grammar correctly in a sentence or if the word you are using is appropriate, and other such light workloads, it seems to function perfectly 99% of the time. It is the more complex and nuanced topics, or technical tasks, where the flaws become too apparent to ignore, and where success is dependent upon working around those flaws to achieve something productive - and again, in a timeframe which does not make the use of this productivity tool redundant.
An experienced coder will have more success than a neonate because flaws in code or syntax can be spotted early and corrections can be requested, as they won’t need to go back and forth from an IDE to test clearly incorrect code snippets. Those of us already experienced with coding can offload dry/dull/boring tasks to a completion AI, while we focus on more crucial logic and flow of our programs - I find it quite useful for generating accurate summary comments, however asking it to generate comments for the code body is excessive and redundant (comments above a file read instruction stating that this instruction will read a file is more clutter than comment).
I don’t see this tool replacing coders any more than a paintbrush could replace an artist. Anyone can generate AI art with prompts, but it takes skill and experience to consistently generate good AI art that closely approximates the directions provided - just as possessing a paintbrush does not make a person an artist, it requires training and skill gained over time with practice. This is the same with coding tasks, and even writing compositions.
I've said before that I feel this tool is useful and will only improve with time, but likely no time soon will it be a magic wand that generates working software without human direction, interaction, and refinement: a great helper (sometimes) for a script or a method or a class, but not full stack.
-
I call ChatGPT an advanced version of Wikipedia: you can search it and it will provide answers to topics it has answers for, but you must still verify and cross-check it afterwards.
"AI" is completely misused, and not even an accurate name for the… art? Science? I'd just call it "a different way to use computers for finding information".
-
Until ChatGPT learns to, and CAN - FLY, it means/knows NOTHING.
-
@Wazaza Still a nice read and makes me curious what Doctor-Chat says about the ‘other’ simulator
-
Ask and ye shall receive…
Says Grumble: compare the flight models of Falcon BMS 4.37 with the F16 in DCS
Says the AI doc…
Comparing the flight models of Falcon BMS 4.37 (specifically for the F-16) with the F-16 in Digital Combat Simulator (DCS) involves delving into aspects like realism, accuracy, and the overall flying experience. Both simulators are highly regarded in the flight sim community, but there are key differences:
Realism and Accuracy:
- Falcon BMS 4.37: Known for its highly realistic and accurate depiction of the F-16’s flight characteristics. The flight model is often praised for its attention to detail in aerodynamics, engine performance, and the aircraft’s response to various flight conditions. It is developed based on real-world data and pilot feedback, aiming to replicate the F-16’s behavior as closely as possible.
- DCS F-16: The DCS F-16 module also offers a highly realistic and accurate flight model. DCS is known for working closely with aircraft manufacturers and using real-world flight test data to create their models. The F-16 in DCS is meticulously crafted, focusing on the nuances of the aircraft’s performance across different flight regimes.
-
Feedback and Handling:
- Falcon BMS 4.37: Pilots often note the immersive feel of the aircraft, citing how the F-16 reacts realistically to inputs and environmental factors. The force feedback and control response are well-regarded, offering a highly engaging flying experience.
- DCS F-16: DCS also provides an immersive and realistic flying experience. The handling of the F-16 is noted for its precision and responsiveness. Pilots can feel subtle differences in flight behavior based on loadout, fuel state, and environmental conditions.
-
Systems Integration:
- Falcon BMS 4.37: While focused on the flight model, Falcon BMS also integrates avionics and systems simulation deeply into the flight experience. This integration impacts the flight model, as pilots must manage systems that affect aircraft performance.
- DCS F-16: DCS places a strong emphasis on the integration of avionics and aircraft systems as well. The way these systems interact with the flight model is also highly detailed, contributing to a comprehensive simulation experience.
-
Flight Envelope and Limitations:
- Falcon BMS 4.37: The simulation covers the entire flight envelope of the F-16, including edge-of-the-envelope flight conditions. It accurately represents the limitations and capabilities of the aircraft under various scenarios.
- DCS F-16: DCS also covers the full flight envelope, with detailed modeling of extreme flight conditions, high angle of attack behavior, and stall characteristics. The effects of various external factors on aircraft performance are meticulously simulated.
-
Community Feedback and Updates:
- Falcon BMS 4.37: As a community-driven project, Falcon BMS continually evolves with updates and improvements based on user feedback and new information.
- DCS F-16: DCS is supported by a professional team and receives regular updates and patches. User feedback is also considered, and the simulation is continuously refined.
In summary, both Falcon BMS 4.37 and DCS provide highly realistic and accurate flight models for the F-16. The choice between the two often comes down to personal preference, the specific aspects of simulation one is looking for, and the broader ecosystem of aircraft and scenarios each platform offers.
-