WIP/Teaser: LSOBot …. BMS Carrier OP Landing Evaluation Tool
-
LSOBot Released: https://www.benchmarksims.org/forum/showthread.php?28908-Release-LSOBot-A-Carrier-Landing-Grading-Tool
Something Tyrant at 1st VFW has been working on for a little while. Still in ‘beta’, but there is a ‘release candidate’. With luck …. 3 to 4 weeks.
-
Well done Tyrant!
-
The grading and comments are all automatic then? I’m not sure what features it includes but it would be nice to export the grades to a csv file for upload to a greenie board on a website.
-
Yes …. comments, grade and wire are ‘calculated’ automatically. Good feature suggestion.
-
Great work, looks awesome!
Also, what HUD mod is that? The glass has a purple tint which would make the Hornet HUD so much easier to see during the daytime!
-
Great work, looks awesome!
Also, what HUD mod is that? The glass has a purple tint which would make the Hornet HUD so much easier to see during the daytime!
Yeah, Demo modded the texture color. It’s much better. I thought he shared it here somewhere, but I can’t find it.
-
Yes …. comments, grade and wire are ‘calculated’ automatically. Good feature suggestion.
Do you know all of the grading criteria it includes? Maybe a better question for Tyrant himself since he wrote the code.
-
Do you know all of the grading criteria it includes? Maybe a better question for Tyrant himself since he wrote the code.
Definitely better for Tyrant. As far as I can tell, it’s magic. …. Or sorcery. You know, however you roll with that kind of thing.
-
I’m not sure what features it includes but it would be nice to export the grades to a csv file for upload to a greenie board on a website.
Both a good idea and easy to do - we’ll be including that one. Thanks!
-
Do you know all of the grading criteria it includes? Maybe a better question for Tyrant himself since he wrote the code.
At the moment, we assess AoA, Glideslope, and Lineup at each of the phases “At the Start”, “Mid”, and “At the Ramp”. We’re working from ACMI data, and we get numbers about five times a second. That raw data is categorized into “ideal”, “good”, “minor deviation”, “major deviation”, and “unacceptable”. We color the dots you can see in the video according to those categories. Next, we summarize each phase by doing a weighted count of the “dots” - larger deviations are weighted more heavily, so if you only have one major deviation and the rest are minor, we’ll call that phase “a little” whatever - slow, high, left, etc.
Next we combine the phases into an overall grade. Basically, deviations are worth points, and smaller deviations are worth fewer points. AoA deviations are worth fewer points than lineup and glideslope. Fewer points get you a better grade. There are a couple of special cases, like that being more than a little low or slow at the ramp will earn you a cut pass, even if that’s the only mistake you made.
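For the curious, the scoring scheme described above boils down to something like this sketch. To be clear, the point values, parameter weights, and grade cutoffs below are made-up placeholders for illustration, not the actual numbers LSOBot uses:

```python
# Illustrative sketch of the deviation-points scheme described above.
# All category points, parameter weights, and grade cutoffs are
# placeholder values, not LSOBot's real thresholds.

# Penalty points per deviation category (larger deviations cost more)
CATEGORY_POINTS = {
    "ideal": 0,
    "good": 0,
    "minor deviation": 1,
    "major deviation": 3,
    "unacceptable": 6,
}

# AoA deviations count for less than glideslope/lineup deviations
PARAMETER_WEIGHT = {"aoa": 0.5, "glideslope": 1.0, "lineup": 1.0}


def score_pass(samples):
    """samples: list of (parameter, category) tuples, one per data point."""
    return sum(CATEGORY_POINTS[cat] * PARAMETER_WEIGHT[param]
               for param, cat in samples)


def grade(points, cut=False):
    """Map a penalty score to an LSO-style grade; fewer points = better.

    A special case like 'more than a little low/slow at the ramp' would
    force cut=True regardless of the score."""
    if cut:
        return "C"        # cut pass
    if points == 0:
        return "OK+"      # flawless pass, no deviations at all
    if points <= 2:
        return "OK"
    if points <= 6:
        return "(OK)"     # fair
    if points <= 12:
        return "--"       # no-grade
    return "C"


samples = [("glideslope", "minor deviation"),
           ("lineup", "major deviation"),
           ("aoa", "minor deviation")]
pts = score_pass(samples)   # 1*1.0 + 3*1.0 + 1*0.5 = 4.5
print(grade(pts))           # (OK)
```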
It’s not perfect, and we’re still working on it, but it’s about 100x better than the garbage algorithm I did before that.
A few notes: first, we did work from the NATOPS manual, but we have so far been unable to get feedback from anyone with real-world LSO experience. We would love to incorporate that. Second, we know that there are deviations that we do not currently include. For example, there’s a symbol for “crossed the glideslope from low to high” and the current version does not include that particular assessment. When we release, in 3 to 4 weeks, do not take it to be “this is the best LSOBot there will ever be”. We do plan to make it better over time.
I also want to point out that even if you don’t like the grades and comments it hands out, I think you will still find it very useful as a visualization tool. Just ignore the grade sheet and come up with your own based on what it shows in the glideslope and lineup views. We think the current assessment algorithm hands out pretty reasonable comments, but we welcome feedback on whether you agree. Since grading is subjective, there is no single correct formula for computing a grade, so there will probably always be some room for disagreement, but we want to get to the point where it seems basically sane. IMO we’re about 95% there with the current version.
Finally, I know the whole thing says, “LSOBot by Tyrant”, but the truth is I had a bunch of help, particularly from Agave_Blue, who has done a lot more than just make that awesome video, and our wingmate Shady over at the 1st VFW. Definitely a team effort.
-
A few notes: first, we did work from the NATOPS manual, but we have so far been unable to get feedback from anyone with real-world LSO experience. We would love to incorporate that.
What info are you looking for? I know a guy…
-
I’m also curious to know how (or if) you’ve implemented OK passes. It would be wicked if there was some way for LSOBot to determine if there were faults or damage and then hand out that coveted grade if you flew a nice pass.
-
What info are you looking for? I know a guy…
Well, it’s mostly that we’re in a “we don’t know what we don’t know” state. The goal for the tool is to provide feedback on traps so that pilots can improve. I think the visualization absolutely does that - even if you totally ignore the grading, you’ll still be able to see where you were off glideslope, AoA, etc., since it’s just a presentation of the position and attitude of the aircraft. It’s the grading where things get more difficult, since it’s more subjective. For instance, if you enter Mid very high but correct to being right on glideslope, should that be marked as “very high”? “High”? “A little high”? There can’t be one right answer, since we’re trying to boil down a whole bunch of numbers into what’s effectively one number, but it would be good to know that our general approach is sane.
And that said, we actually do have a few questions about the concrete parts, too. For instance, the source materials we’ve found suggest that ideal AoA might be 3.3 degrees, but we also understand this can be adjusted for different conditions. How should we handle that?
At a minimum, then, it would be great to have someone who knows how the pros do it take a look at it and say, “Basically sane”, “Close-ish”, or “Totally FUBAR”. One step better would be a description of how it deviates. A step better still would be an extended conversation about how to improve the tool in the context of the sim, since there may be good reasons to make it different from RL.
I’m also curious to know how (or if) you’ve implemented OK passes. It would be wicked if there was some way for LSOBot to determine if there were faults or damage and then hand out that coveted grade if you flew a nice pass.
We don’t have quite enough information to do this in all cases. For instance, aircraft damage is not necessarily present in the ACMI. Right now you can get an OK+ by flying a pass with no deviations whatsoever - not even a little high/low/fast/slow/left/right at any stage of the landing. I’m not sure I’ve ever seen even the AI do this, though. Kudos if you pull it off.
I think the right way to handle this in the future might be to simply include checkboxes labeled “Difficult conditions” or “Damaged” or whatever that would let a human add in additional factors. Then the program could adjust grades based on that - bump it up by half a grade or whatever makes sense.
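Something like this, roughly - note the “Difficult conditions”/“Damaged” checkboxes are just the hypothetical idea from above, and the grade ladder and bump amounts are placeholders, not anything implemented:

```python
# Hypothetical sketch of the 'condition checkboxes' idea: a human ticks
# boxes for factors the ACMI can't tell us about, and the computed grade
# gets bumped up the ladder. Ladder and bump sizes are placeholders.

GRADE_LADDER = ["C", "--", "(OK)", "OK", "OK+"]

# Bump per checked box, expressed in ladder steps (illustrative values)
ADJUSTMENTS = {"difficult_conditions": 1, "damaged": 1}


def adjust_grade(grade, checked):
    """Bump the computed grade one ladder step per checked box, capped
    at the top of the ladder."""
    idx = GRADE_LADDER.index(grade)
    idx += sum(ADJUSTMENTS[box] for box in checked)
    return GRADE_LADDER[min(idx, len(GRADE_LADDER) - 1)]


print(adjust_grade("OK", ["damaged"]))  # a clean pass in a damaged jet -> OK+
```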
I’d also point out that to some degree chasing more and more “accurate” grading is a slippery slope. First, it’s subjective. Second, in the limit it involves creating the sort of machine learning system that would be fun to write, but is probably outside the scope of a hobby system, when I have other things I’d like to do for the community as well.
But we’re definitely not done, so I welcome conversations about what would be helpful, efficient, and beneficial to add or change.
P.S. Adding CSV export right now.
-
…… And that said, we actually do have a few questions about the concrete parts, too. For instance, the source materials we’ve found suggest that ideal AoA might be 3.3 degrees, but we also understand this can be adjusted for different conditions. How should we handle that? …
I think you mean a 3.3* glideslope here? That ‘estimate’ is based on what we observe in BMS to be ‘correct’ …. basically what the AI flies, iirc. I do believe there is some external source material (NATOPS?) that indicates a ‘normal’ glideslope range and our ‘estimate’ fits within that range. More to the point … what is the ideal ‘BMS Glideslope’ and does it change with conditions, AC selection, etc.?
Another ‘good to know’ item would be different AoA ranges for different model AC. I’m pretty sure C/D may have a different optimum AoA range than the E/F, for example.
-
I think you mean a 3.3* glideslope here?
Yes, sorry - you are correct. And +1 on the rest of your comments.
-
Actually, the AOA has very little (if anything) to do with setting the lens - it’s based on hook-to-eye distance for a particular jet, and what the LSO is really doing is flying the hook point into the wire, not the “jet” itself. So as long as you can get hold of proper lens settings for a platform (which I think also depend on landing gross weight - as AOA will also be) you can get closer to the desired “realisimo”.
-
Actually, the AOA has very little (if anything) to do with setting the lens - it’s based on hook-to-eye distance for a particular jet, and what the LSO is really doing is flying the hook point into the wire, not the “jet” itself. So as long as you can get hold of proper lens settings for a platform (which I think also depend on landing gross weight - as AOA will also be) you can get closer to the desired “realisimo”.
And in fact what LSOBot assesses is the hook position - you have to fly it through the correct profile to get a good grade. Obviously, the farther out you are, the less it matters.
But I think what Agave_Blue was referring to was that we observe AI F-18C and F-18D pilots flying quite different AoA on approach. The D model flies a fair amount faster.
-
That does sound odd - especially given that the D should be lighter (thus slower), but that does not mean on-speed for the two jets is, or even should be, similar in a given situation. It has to do with the geometry of the airplane and landing gross weight. One of the things we VR Viper-drivers are spoiled by is that the Viper doesn’t seem to have a landing weight limit - at least not one that I can find documented, anyway.
For CV ops, bringback and GWT into the wire are critical - and limiting - factors. In any event, a proper CV approach is flown with the indexer centered, and that speed and AOA will vary substantially with GWT and CG. So it’s not really fair to compare even two jets of the same type, unless you know something about how they are configured and what they weigh. It sounds like you’re doing the right thing: ignore the actual approach speed and grade only whether the jet is on-speed during the approach. In which case you’d want to grade by monitoring the indexer…if that’s even possible.
-
For CV ops, bringback and GWT into the wire are critical - and limiting - factors. In any event, a proper CV approach is flown with the indexer centered, and that speed and AOA will vary substantially with GWT and CG. So it’s not really fair to compare even two jets of the same type, unless you know something about how they are configured and what they weigh. It sounds like you’re doing the right thing: ignore the actual approach speed and grade only whether the jet is on-speed during the approach. In which case you’d want to grade by monitoring the indexer…if that’s even possible.
We’re assuming the indexer in the Hornet shows AoA unadjusted for any other factor - like the one in the Viper does - and that’s exactly what we grade. Ideal AoA is 8.1* and there are thresholds around that defining minor, major, and unacceptable deviations.
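In code terms it’s roughly this kind of banding. The 8.1* ideal is from our current settings, but the band widths below are invented placeholders for illustration, not LSOBot’s real thresholds:

```python
# Sketch of banding one AoA sample around the ideal value, per the post
# above. Ideal 8.1 degrees comes from the discussion; the band widths
# are invented placeholders, not LSOBot's actual thresholds.

IDEAL_AOA = 8.1

def classify_aoa(aoa, ideal=IDEAL_AOA):
    """Return the deviation category for one AoA sample (degrees)."""
    dev = abs(aoa - ideal)
    if dev <= 0.2:
        return "ideal"
    if dev <= 0.5:
        return "good"
    if dev <= 1.0:
        return "minor deviation"
    if dev <= 2.0:
        return "major deviation"
    return "unacceptable"

print(classify_aoa(8.0))  # ideal
print(classify_aoa(9.5))  # major deviation
```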
-
I’m already certain that the BMS Hornet model isn’t very accurate (in a lot of respects)…it should show on-speed for the gear down/full flaps configuration only, and unlike the Viper be off (dark/blank/not on) in all other situations. It should also show five vice three indications vs the Viper…which makes it a bit more “ticklish”. But attitude to get to on-speed AOA is still going to vary with GWT and CG, and that will put the hook point higher or lower in the groove as a result. I strongly suspect that all of the variables involved are not in play at this time in BMS history…after all, it’s a Viper sim.