TrackIR moving HMD only (for simpit), not moving with the rendering camera? Possible?
-
Hi flying men,
So I never was big into cockpits, but recently since the VR not being yet as great as I would like, I was wondering about getting a beamer setup for a super-large screen in a curve. Something like on the last picture below.
BUT I wondered how would that work with TrackIR and the fact that HMD targeting is a must, but whole camera movement on such setup is contra-productive. So the question is:
Is there a BMS mode for camera/TrackIR where camera is stable looking forward, and the TrackIR input only moves HMD targeting. ? Here is a lame paintbrush representation of what I am thinking about before actually dedicating to building something like that: -
I’d like to do this myself…you need warping software to do the background OTW on the screen, and then some sort of projection/reticle in front of your eye.
The real stumbling block is that the alignment mechanics for aligning the JHMCS with the OTW are not implemented in BMS…or at least not fully implemented. This would be required in order to decouple the reticle from the OTW, just as in RL.
…so I’m standing by - maybe in 4-6 weeks.
-
The warping part and the screen FOV are solved, I assume. There are several warping-software vendors that pre-warp the projector image to match a curved screen/wall. And in BMS the FOV should stretch just fine; I had the pleasure of configuring it for a 34-inch ultrawide monitor, and that didn’t prove to be difficult.
I was hoping, however, that if this were available, on a traditional monitor you would see the HMD moving off to the sides, just as if it were a giant mouse cursor mapped to TrackIR. I wanted to check before investing, because without the HMD, dogfights would become much harder, and enabling camera-moving TrackIR on such a large screen would be weird and would kill the immersion.
But yes, if internally in BMS the targeting functions are centered on the camera’s main axis and cannot be decoupled (any developer comment would be super-welcome), we are out of luck. This would need developer focus in future BMS releases, but I guess this is a niche request within a niche community.
-
As I understand things, if you are using more than one monitor/projector you also need some sort of software blending/warping package in order to smoothly blend the multiple screens at the edges. Also, if you are using a non-standard screen size/shape/resolution you will need a blending package.
The JHMCS reticle as presented is part of the overall OTW scene, so it has to be decoupled somehow; otherwise the reticle and the whole scene are going to move together. What you need to do is blank the reticle on the screen and then use a display extractor to put it onto something like Google Glass, similar to how you’d build a working HUD -
https://www.vuzix.com/products/blade-smart-glasses
But the problem of aligning a device like Google Glass or Vuzix Blade to BMS OTW still remains. One could write an add-on routine to interface with BMS and make it look like the JHMCS Align function was implemented (the options are there on the DED, but BMS does nothing with them), but this is another level of effort required.
Personally, I absolutely hate TrackIR…I prefer inertial head-tracking solutions if I’m going to use one at all, because they allow greater freedom of natural head movement and also aren’t sensitive to lighting conditions within the room - which can be a real issue when using projectors and/or large screens. The two devices I’ve mentioned have inertial tracking built in, though I have and prefer the EDTracker Pro as a stand-alone unit -
-
Well, coding effort is one thing; I am not afraid of that when it comes to building something cool long term. And TrackIR is just the entry-point library for the game: even if you use something better like the EDTracker (didn’t know it existed, thanks!), you would still take the TrackIR library as your input, as that is what most games support. What is needed here is to disable the TrackIR inputs from changing the camera position, and to code a TrackIR-provided “angle to X,Y position” translation for the HMD GUI. Plus, inside the engine, substituting the camera angles with the new angle of the HMD (the angle from the camera source through the X,Y of the HMD as the new camera axis) for target locking.
However, the second half here is that at times like these I regret that FreeFalcon didn’t survive. There it would at least be remotely possible for me to get the code and, on long nights, attempt to code such stuff in. With secretive BMS development, it looks like I am out of luck, and I do not have the time to become an active developer under an NDA with someone. But I do believe that putting BMS on GitHub would bring a fresh impulse of life here. But again, this is off-topic.
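To make the “angle to X,Y position” translation concrete, here is a minimal sketch (my own illustration, not BMS or TrackIR code) that maps head yaw/pitch from a tracker to pixel coordinates on a fixed forward-looking screen. It assumes a simple rectilinear projection and that the screen’s horizontal/vertical FOV are known; a real curved-screen setup would need the warping software’s mapping instead:

```python
import math

def head_angles_to_screen_xy(yaw_deg, pitch_deg,
                             screen_w, screen_h,
                             hfov_deg, vfov_deg):
    """Map head yaw/pitch (degrees, 0 = straight ahead) to pixel
    coordinates on a fixed forward-looking screen, assuming a plain
    rectilinear projection. Returns None when the look direction
    falls outside the screen's field of view."""
    half_h = math.radians(hfov_deg / 2.0)
    half_v = math.radians(vfov_deg / 2.0)
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    # Rectilinear projection: screen position grows with tan(angle).
    x_norm = math.tan(yaw) / math.tan(half_h)    # -1 .. +1 across the width
    y_norm = math.tan(pitch) / math.tan(half_v)  # -1 .. +1 across the height
    if abs(x_norm) > 1.0 or abs(y_norm) > 1.0:
        return None  # looking off-screen; hide the reticle
    x = (x_norm + 1.0) / 2.0 * screen_w
    y = (1.0 - (y_norm + 1.0) / 2.0) * screen_h  # screen y grows downward
    return int(x), int(y)
```

An overlay app would call this every frame with the tracker’s current angles and draw the HMD reticle at the returned point - essentially the “giant mouse cursor” from the opening post.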
-
Yeah - using the TrackIR interface I have no issue with…I just don’t like the device itself. I think if BMS went open source it would descend into chaos, though…
It’s far easier to just blank the reticle from the screen and display it on a display on your head - I’m going to do that to implement a working HUD…and I’m going to knock out that silly nose model - I’ve never had a seat in a real jet where you could see the nose from the seat. Why build a fighter where the nose would obscure the view? I just have to come up with a way to align the reticle in my on-head display, which I don’t think will be that difficult using the existing interface. I just need to make my routine talk to BMS.
What I don’t know yet is if the reticle can be treated like an MFD or the HUD for display extraction…I may need to use something like YAME to do that.
-
So, about this “add-on routine to interface with BMS” you want to write for your HMD input: how does that work? I am not aware that you can write input like that for BMS. Ergo, you will get a vector from your head orientation; how do you plan to give this vector to BMS to use as the in-game HMD vector?
-
Hi do you know whether the edtrackerpro will work with arma3?
Thanks.
Regards Metalhead -
Hi do you know whether the edtrackerpro will work with arma3?
Thanks.
Regards Metalhead
It will work with any game/sim that will work with TrackIR.
-
So, about this “add-on routine to interface with BMS” you want to write for your HMD input: how does that work? I am not aware that you can write input like that for BMS. Ergo, you will get a vector from your head orientation; how do you plan to give this vector to BMS to use as the in-game HMD vector?
So, what I would have to do is:
1) blank the HMCS reticle from the BMS OTW.
2) extract the BMS HMCS reticle and display it on a head/helmet-mounted display over my eye.
3) extract my look angle from my head-mounted device and feed that to BMS, presumably through the BMS TrackIR interface.
4) find a way to align that look angle to the BMS world - similar to boresighting a MAV. (This is how it’s done in RL, BTW.)
5) implement #4 in such a manner that I can use the existing HMCS Align options on the DED to make it all work; i.e., I’ll also have to read SM from BMS to drive my alignment.
I seem to recall that we now have some limited ability to write to BMS SM…I may not be able to do any of the above without that ability.
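For the SM reading part: BMS publishes its flight data through a Windows memory-mapped file named FalconSharedMemoryArea. Below is a read-only sketch of the general idea. The layout of the first twelve floats is my assumption based on the commonly published FlightData struct (position, velocity, alpha/beta/gamma, then pitch/roll/yaw); verify it against the FlightData.h shipped with your BMS version before trusting any value:

```python
import mmap
import struct

SHM_NAME = "FalconSharedMemoryArea"  # name of BMS's flight-data mapping

# Assumed leading fields of FlightData (check FlightData.h for your
# BMS version -- this ordering is an assumption, not gospel):
# x, y, z, xDot, yDot, zDot, alpha, beta, gamma, pitch, roll, yaw
FLIGHT_DATA_HEAD = struct.Struct("<12f")

def parse_flight_data_head(buf):
    """Unpack the leading floats of a FlightData buffer into a dict."""
    names = ("x", "y", "z", "xDot", "yDot", "zDot",
             "alpha", "beta", "gamma", "pitch", "roll", "yaw")
    return dict(zip(names, FLIGHT_DATA_HEAD.unpack_from(buf, 0)))

def read_flight_data():
    """Open the BMS shared-memory area and read the leading fields.
    Windows only, and BMS must be running in 3D for the mapping to
    exist; on other platforms mmap has no tagname argument."""
    with mmap.mmap(-1, FLIGHT_DATA_HEAD.size, tagname=SHM_NAME,
                   access=mmap.ACCESS_READ) as shm:
        return parse_flight_data_head(shm[:FLIGHT_DATA_HEAD.size])
```

Writing back to SM (for the Align handshake) would need whatever write-capable area BMS exposes, which is exactly the open question above.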
-
“3) extract my look angle from my head mounted device and feed that to BMS, presumably through the BMS TrackIR interface.”
This point is exactly my problem from the opening post. The moment you feed this data to the TrackIR interface, the whole game camera starts moving with it. You need to fix the camera in place, just like I need to.
-
That goes straight to my point about the requirement to do an alignment - you have to de-couple the reticle from the OTW and then blank it from the OTW. Then deselect TIR as your View control.
Even though you tell BMS not to use the TIR input to move the OTW, that input data is still present, and the angles wrt the target can be calculated from it. It’s this calculation that is needed to slave sensors and bug the target you’re looking at. This is also where there may be a requirement to be able to write to SM, and not just read it, in order to do this.
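The angle calculation itself is plain trigonometry. A sketch (the frame conventions here are mine, not BMS’s): turn the tracker’s yaw/pitch into a unit look vector, then take the angle between that vector and the direction to a candidate target. A sensor-slaving routine would bug the target with the smallest angle off boresight:

```python
import math

def look_vector(yaw_deg, pitch_deg):
    """Head yaw/pitch (degrees) to a unit look vector in an assumed
    right-handed frame: +x forward, +y right, +z up."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))

def angle_off_boresight(yaw_deg, pitch_deg, target_dir):
    """Angle (degrees) between where the head is looking and a unit
    vector toward the target -- the number a slaving routine would
    compare against its designation cone."""
    lx, ly, lz = look_vector(yaw_deg, pitch_deg)
    tx, ty, tz = target_dir
    dot = max(-1.0, min(1.0, lx * tx + ly * ty + lz * tz))
    return math.degrees(math.acos(dot))
```

The hard part is not this math but the alignment: relating the tracker’s zero to the BMS world frame, which is what the JHMCS Align procedure would normally establish.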
-
Yes, I get all that. My point was that I do not see how to even start achieving that with some form of development. I tried to search for any BMS APIs or plug-in system to code this stuff against, but didn’t find anything that looked useful to base our “3rd-party” development of such decoupling on.
So how do we even start here?
-
The first thing to do is to blank the reticle from the OTW while still being able to extract it. People can do this with the HUD already, so it’s just a matter of capturing the other graphic and recycling that code/process. Then do some experimenting with a second screen to see whether, once decoupled from the OTW, the reticle still follows TIR inputs by default - that would certainly make life simpler, and could possibly eliminate the need for an actual alignment…but I strongly doubt it.
The trickier part is what can we do via SM - read/write vs read only. I’d start by looking over the SM map and then noodle things out…but as you’ve already guessed, it’s all going to be new ground and new learning. A lot of new learning.