ChatGPT's answer about the flight model of BMS
-
@AviationPlus said in ChatGPT's answer about the flight model of BMS:
Knife edge lol
Oh Sh**t here we go again!
LOL
-
Can you also ask chatGPT the difference between the various blocks?
-
@AviationPlus said in ChatGPT's answer about the flight model of BMS:
Knife edge lol
GPT is a bit confused.
-
Sometimes it occurs to me that it's a good thing no aliens ever tried to browse the human internet. With all the nonsense, they'd conclude within five minutes that we should be nuked from orbit before we spread across the universe.
-
@SoBad said in ChatGPT's answer about the flight model of BMS:
@airtex2019 said in ChatGPT's answer about the flight model of BMS:
it’s pretty clearly scraping this discussion forum as part of its training / knowledge graph
…and keep in mind that this A.I. ability to inhale info, process it, and provide it back to us in a meaningful way is just beginning. I would not be surprised if, 10 years from now, you could feed Bard or his contemporary the entire BMS source code (or any source code, for that matter), and receive in seconds a drastically optimized version of that source code.
> This is going beyond the scope of this post, but what I really look forward to is A.I. musical composition. I’m confident that A.I. will be able to study all of the classics, find patterns of musical sequences that are associated with human emotions, and produce masterpieces without assistance. Composers could then take the results and enhance them even further. I would not be surprised if the entertainment industry-- especially the movie-making industry-- would show their finished film to an A.I. which would then create a soundtrack which a human composer would then polish for presentation. I truly hope to live at least 20 more years to see the amazing advances that are going to happen if we can just somehow manage to maintain global civility.
It happened years ago.
But in fact this is pretty easy: it is just remixing many existing pieces of music into something new but similar. Music is in fact very algorithmic and mathematical, which is why you can't distinguish it from a human's work.
Meanwhile, you can VERY easily tell when the dumb bots draw a human with 6 fingers…
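The "remixing existing music" idea can be sketched as a toy first-order Markov chain: learn which note follows which in some input melodies, then generate a new-but-similar sequence. This is purely illustrative; the melodies below are invented, not real pieces.

```python
import random
from collections import defaultdict

def train(melodies):
    """Record, for each note, which notes followed it in the input melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def remix(transitions, start, length, seed=0):
    """Walk the chain from a starting note, picking successors at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = transitions.get(out[-1])
        if not nxt:
            break  # dead end: note never had a successor in training data
        out.append(rng.choice(nxt))
    return out

melodies = [["C", "E", "G", "E", "C"], ["C", "E", "D", "C"]]
chain = train(melodies)
print(remix(chain, "C", 8))
```

Every transition in the output was seen in the training melodies, so the result always sounds "similar" to the inputs, which is the essence of the remixing argument.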
The hype around ChatGPT and similar tools is huge, but if you ask me this is just another dotcom bubble. Whenever I asked about anything that can't easily be found and scraped, especially outside English, the AI became a plain stupid liar.
Even when "AI" produces something, it is often inaccurate or useless. The AI fans rely on survivorship bias: they show only the valuable results, not the countless useless / junk outputs…
The AI seems intelligent to the average person because they mostly ask wiki-level questions or simpler, and the AI simply reproduces the most general pattern, which is acceptable to many people. Even when it is inaccurate, it is believable. Ask a truly specific question and the AI is hopeless, because it cannot really learn and does not have real human-like knowledge with the power of genuine mental combination.
Just try to have it draw even a basic military-aviation subject. I spent hours with these AI-generated images; even the most basic request failed in countless ways. Yes, it generated a nice fantasy-themed image, but it was ANYTHING except what I requested…
Does it look nice? Yes. Is it what I requested?
NOPE. NOT EVEN A BIT. I wanted a woman warrior with long dark hair, leaning on her sword and standing with one foot on a helmet. Not the hair color, not the eye color, not a SINGLE detail is accurate. It is just a randomly remixed woman's face with a fantasy armor layer.
This is the current level of AI: it can generate images that are 99.99% random.
According to Midjourney, this is an F-15 Eagle fighter aircraft. You can't specify it any more exactly than that. It cannot draw even a loosely random image of an EXACT object. Just imagine how many parameters it takes to describe an F-15…
The only thing AI can do now is produce completely random things, and sometimes, if it is lucky, provide something at least vaguely similar to what you requested. AI can only serve the "needs" of customers who don't have any real, exact requirements…
And the AI has no idea what happens in a case like this.
What a 100% noob person can do with PS7 within 20 min…
-
If off-topic is still allowed: having been an AI practitioner for many years now, I will make 2 comments on LLMs (large language models). (I do not know / have not mastered other generative models well.)
-
LLMs are not smart. As Mav JP stated, they spit out information that was injected before. The way they do it is really interesting. But this is not smart.
-
Where I see very interesting application is that LLMs can simplify the interface between the machine and humans. I see it as some sort of modern age “Windows”.
e.g.: feed a custom LLM the BMS manual, train and customize it to minimize hallucination, and then, instead of spamming the BMS forum with "Why is my aircraft not turning on the ground?" and risking an "RTFM" answer (I miss those good old days :p), the LLM will provide the NWS info, even with the default shortcut for it.
And this is cool… I think.
And this would be possible ONLY because there is a BMS manual, well written, with all the relevant info inside it (it does not even need to be super well written, by the way; the approach is quite robust to mixed-up ideas within documents). So the smartness is within the BMS dev team; the LLM is just the tool that helps you access the info (in case you are so freaking lazy you can't freaking read a well-written manual).
Joke aside, this has many interesting applications when you have crazily complicated norms to follow, with updated versions stacked on top of each other and terrible versioning…
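The "point a model at the manual" workflow above boils down to retrieval: find the passage most relevant to the question before (or instead of) handing it to a model. Here is a minimal sketch using plain word overlap; the manual passages are made-up stand-ins, not actual BMS manual text, and no real LLM API is involved.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into alphabetic words."""
    return re.findall(r"[a-z]+", text.lower())

def best_passage(question, passages):
    """Return the passage sharing the most words with the question."""
    q = Counter(tokenize(question))
    def overlap(p):
        # Counter intersection keeps the minimum count per shared word
        return sum((q & Counter(tokenize(p))).values())
    return max(passages, key=overlap)

manual = [
    "Nosewheel steering (NWS) must be engaged before the aircraft "
    "will turn on the ground.",
    "The jet fuel starter (JFS) spins the engine during start.",
]

print(best_passage("Why is my aircraft not turning on the ground?", manual))
```

A production setup would embed passages with a vector model rather than count words, but the principle is the same: the answer quality depends entirely on the manual being there to retrieve from, which is the point being made above.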
-
-
@yop217 Well said. LLM AI is a great tool for information, so long as the information it presents is accurate to the input and factual to the subject. The largest issue is that an LLM is incapable of being unsure and can never respond with, "I am not quite certain about your question given the provided context. Could you please elaborate, providing more detailed information about what you are asking?"
Instead, it will make an assumption and then boldly attempt to answer the user input based on that assumption – if it is an incorrect assumption, the response can be incorrect or even misleading.
I think these faults should be the focus for improvement moving forward, as they are the single largest barrier to more dependable mainstream use. Right now, the best users of ChatGPT who are able to actually improve productivity using it are those who know its faults very well and can account for them and work around them in a timely manner which does not detract from the productivity gained by using such a tool.
In the end, that is all it is: a tool. "Needs improvement" should rightfully be stamped on the side, and I very much understand @Mav-jp's trepidation and lack of enthusiasm for this tool. It's one thing for it to regurgitate incorrect facts with "the best intentions", but for tasks such as providing numbers and instructions for calculations, still giving wrong answers or using bad math to produce the result is downright unnerving, to say the least.
That being said, I very much find it useful as well as entertaining. I feel it is more useful for those who know and can work around these limitations, but for simple tasks such as asking if you’re using grammar correctly in a sentence or if the word you are using is appropriate, and other such light workloads, it seems to function perfectly 99% of the time. It is the more complex and nuanced topics, or technical tasks, where the flaws become too apparent to ignore, and where success is dependent upon working around those flaws to achieve something productive - and again, in a timeframe which does not make the use of this productivity tool redundant.
An experienced coder will have more success than a neonate because flaws in code or syntax can be spotted early and corrections can be requested, as they won’t need to go back and forth from an IDE to test clearly incorrect code snippets. Those of us already experienced with coding can offload dry/dull/boring tasks to a completion AI, while we focus on more crucial logic and flow of our programs - I find it quite useful for generating accurate summary comments, however asking it to generate comments for the code body is excessive and redundant (comments above a file read instruction stating that this instruction will read a file is more clutter than comment).
I don’t see this tool replacing coders any more than a paintbrush could replace an artist. Anyone can generate AI art with prompts, but it takes skill and experience to consistently generate good AI art that closely approximates the directions provided - just as possessing a paintbrush does not make a person an artist, it requires training and skill gained over time with practice. This is the same with coding tasks, and even writing compositions.
I’ve said before that I feel this tool is useful and will only improve with time, but likely no time soon will it be a magic wand to generate a working software without human direction, interaction, and refinement – great helper (sometimes) for a script or method or a class, but not full stack.
-
I call ChatGPT an advanced version of Wikipedia: you can search it and it will provide answers to topics it has answers for, but you must still verify and cross-check it afterwards.
“AI” is completely mis-used, and not even an accurate name for the … art? Science? I’d just call it “a different way to use computers for finding information”.
-
Until Chat GPT learns to, and CAN - FLY, it means/knows NOTHING.
-
@Wazaza Still a nice read and makes me curious what Doctor-Chat says about the ‘other’ simulator
-
Ask and ye shall receive…
Says Grumble: compare the flight models of Falcon BMS 4.37 with the F16 in DCS
Says the AI doc…
Comparing the flight models of Falcon BMS 4.37 (specifically for the F-16) with the F-16 in Digital Combat Simulator (DCS) involves delving into aspects like realism, accuracy, and the overall flying experience. Both simulators are highly regarded in the flight sim community, but there are key differences:
Realism and Accuracy:
- Falcon BMS 4.37: Known for its highly realistic and accurate depiction of the F-16’s flight characteristics. The flight model is often praised for its attention to detail in aerodynamics, engine performance, and the aircraft’s response to various flight conditions. It is developed based on real-world data and pilot feedback, aiming to replicate the F-16’s behavior as closely as possible.
- DCS F-16: The DCS F-16 module also offers a highly realistic and accurate flight model. DCS is known for working closely with aircraft manufacturers and using real-world flight test data to create their models. The F-16 in DCS is meticulously crafted, focusing on the nuances of the aircraft’s performance across different flight regimes.
-
Feedback and Handling:
- Falcon BMS 4.37: Pilots often note the immersive feel of the aircraft, citing how the F-16 reacts realistically to inputs and environmental factors. The force feedback and control response are well-regarded, offering a highly engaging flying experience.
- DCS F-16: DCS also provides an immersive and realistic flying experience. The handling of the F-16 is noted for its precision and responsiveness. Pilots can feel subtle differences in flight behavior based on loadout, fuel state, and environmental conditions.
-
Systems Integration:
- Falcon BMS 4.37: While focused on the flight model, Falcon BMS also integrates avionics and systems simulation deeply into the flight experience. This integration impacts the flight model, as pilots must manage systems that affect aircraft performance.
- DCS F-16: DCS places a strong emphasis on the integration of avionics and aircraft systems as well. The way these systems interact with the flight model is also highly detailed, contributing to a comprehensive simulation experience.
-
Flight Envelope and Limitations:
- Falcon BMS 4.37: The simulation covers the entire flight envelope of the F-16, including edge-of-the-envelope flight conditions. It accurately represents the limitations and capabilities of the aircraft under various scenarios.
- DCS F-16: DCS also covers the full flight envelope, with detailed modeling of extreme flight conditions, high angle of attack behavior, and stall characteristics. The effects of various external factors on aircraft performance are meticulously simulated.
-
Community Feedback and Updates:
- Falcon BMS 4.37: As a community-driven project, Falcon BMS continually evolves with updates and improvements based on user feedback and new information.
- DCS F-16: DCS is supported by a professional team and receives regular updates and patches. User feedback is also considered, and the simulation is continuously refined.
In summary, both Falcon BMS 4.37 and DCS provide highly realistic and accurate flight models for the F-16. The choice between the two often comes down to personal preference, the specific aspects of simulation one is looking for, and the broader ecosystem of aircraft and scenarios each platform offers.
-
-
@Grumble said in ChatGPT's answer about the flight model of BMS:
- **DCS F-16**: DCS also provides an immersive and realistic flying experience. The handling of the F-16 is noted for its precision and responsiveness. Pilots can feel subtle differences in flight behavior based on loadout, fuel state, and environmental conditions.
Best joke of the year
-
@Grumble said in ChatGPT's answer about the flight model of BMS:
DCS F-16: DCS is supported by a professional team and receives regular updates and patches. User feedback is also considered, and the simulation is continuously refined.
Yes. They use it to determine whether to ban the user or not.