Beta Release: GPT (cockpit texture extraction, remote cockpit control, shm mirror)
-
Beta Release: GPT v0.3
Beta 0.3 for 4.33 released. This is very much experimental software… expect bugs…
https://github.com/GiGurra/gpt/releases
There is a bug tracker available at:
https://github.com/GiGurra/gpt/issues
You can also request new features there.
–-----------------------------
OLD POST for reference below:
Beta Release: GPT v0.2
Been a while but I’m now releasing GPT beta v0.2.
Please note that this version is likely more unstable than the previous ones and has a new installation and configuration procedure. For new people I would still recommend trying the old GPT first (still available below in this post). But for everyone out there who wants to try the latest release, you can grab it from:
https://github.com/GiGurra/gpt/releases
Note that there is a new README inside the release zip. Please read it.
There is a bug tracker available at:
https://github.com/GiGurra/gpt/issues
You can also request new features there.
–-----------------------------
OLD POST for reference below:
What started with this has turned into a full toolkit release, now called GPT.
I’m making this post to announce the first beta release of the software tools I’ve been working on.
A full explanation of what the toolkit does and how to install and use it can be found in the manual:
http://gigurra.se/gurrasPitTools/manual.pdf (the manual itself is also very much a beta release)
Here is a quick demo video (sorry about the lousy video quality, I only have a phone as a camera) of what is possible with the tools. While this could be done before as well, with GPT it is possible to use a slave PC to run all the cockpit tools and render the MFDs etc. The touch screen cockpit gauges and switches are handled by Helios by Gadroc (see http://www.gadrocsworkshop.com/).
Big Thanks to Scratch for helping out with making the manual, foobie42 for pointing me in the right direction for finding the correct FBOs to download, and some of the BMS guys who sent me advice over PMs.
Download links can be found in the manual. Please read the manual :).
Help, how do I use this?
Refer to the manual (see above) first. If that is not enough, please do not PM me with help requests. I find PMs are too slow. Contact me on msn, skype or teamspeak instead. If your time zone does not match mine (CET) perhaps you can find someone else to help you?
Can I use GPT for my own software projects?
Yes, and that was the intention from the start. GPT is released to the public as open source under GPL (v3). See the manual for how to acquire the source code. For example: in theory it would be possible to create an iPhone/Android program which receives the downloaded cockpit texture exported by GPT.
Future ideas
- I’ve been thinking that maybe the tools should support multiple transmitters/receivers (not just 1<->1). Something for the future perhaps.
- An installer would be good to have, but I have no clue how to make that ^^
BUGS!
Report any issues you encounter in this thread. If you suspect you’ve found a bug, please post it here; if it’s not reported I have no chance of fixing it :P.
LICENSING AND EXCEPTIONS
This software is released to the public under GPL (see the manual for details); however, exceptions can be made and the software may be released to specific people, groups and/or organizations under other licenses.
The DisplaysTransmitter part of GPT is dual licensed, and can also be used under the MIT license (see the DisplaysTransmitter svn repository to download the full license text).
-
Update, sorry guys, I managed to mess up the build scripts when I renamed all the parts. I’ve uploaded a new version (RELEASE_20120602_1346) which should be working. Everything above still applies :). But if you were quick enough to download the initial release then you will get a java error saying “no main method found”, because it was looking for the old names.
-
Hello! Just finished reading your manual and would’ve liked this program months ago! Unfortunately, I’ve now built my setup to run with one PC. I wonder though if there is a way to extract MFD/DED to a second monitor while keeping the game screen “full screen” and reduce the “overhead” and reduce the stutters or increase FPS?
I also wonder what your system specs are to compare to my own and my stutter experience.
-
Hello! Just finished reading your manual and would’ve liked this program months ago! Unfortunately, I’ve now built my setup to run with one PC. I wonder though if there is a way to extract MFD/DED to a second monitor while keeping the game screen “full screen” and reduce the “overhead” and reduce the stutters or increase FPS?
I also wonder what your system specs are to compare to my own and my stutter experience.
I’m not sure if this will improve your situation, but you can try using the GPT DisplaysTransmitter and set the ini file to use SHM instead of TCP messages (that way it will just download the displays texture and write it to SHM as raw RGBA instead of sending it over TCP as JPEG). Then launch the DisplaysReceiver on the same computer. This will probably reduce your FPS compared to BMS’s built-in MFD extractor, but it might allow you to play full screen, and might reduce stutters.
My specs are:
Primary PC (runs BMS with the GPT DisplaysTransmitter, the GPT ShmTransmitter and the GPT KeyComReceiver):
- generation 1 i7 @ 3.2 GHz (i7 960)
- PCIe 2.0 16x
- GTX 680

Slave PC (runs all the other GPT tools and Helios):
- generation 1 i7 @ 2.67 GHz (i7 920)
- 2x Nvidia GT210 (low-end GPUs, but enough for Helios & MFDs)
At first I got the GTX 680 in order to try to alleviate the stutters and FPS loss that I had before with my previous graphics card on my primary PC (using the built-in extractor). Back then I had the GT210s installed in my primary PC to run the MFD monitors (I also tried a GTX 260 with the same results), but the performance wasn’t very good. I had hoped that the GTX 680 would solve the problem (since then I wouldn’t need any extra GPUs), but unfortunately it didn’t: the stutters when using the built-in extractor almost got worse. At that point I decided I would try to make GPT instead, and let a slave computer render the MFDs, which has really made my FPS a LOT smoother. I recently flew a campaign mission with all the cool stuff like HDR on and superb FPS from the tarmac to the target, which I was never able to do before :).
But I expect that how GPT performs will differ from system to system. I expect it to work well on Nvidia cards though, as that’s what I’ve been using. The important thing is that the texture download is issued directly after the buffer swap (the d3d Present function), and is performed asynchronously up to the point where the sim starts painting the cockpit displays texture again.
-
You deserve a monument. I hope this development continues and flourishes into a fully featured two seater simulation. That will make Krausey very happy.
-
GiGurra thank you! Will this work over the internet and not just local network? Such software has a huge potential for pilot/WSO simulation and for training purposes.
-
GiGurra thank you! Will this work over the internet and not just local network? Such software has a huge potential for pilot/WSO simulation and for training purposes.
In its current form it transmits everything as JPEG-encoded images… This requires a great deal of bandwidth, amounts far exceeding normal Internet connection speeds. In the future, if we want to run this online, we will need to encode it as a proper video stream instead… Unfortunately I don’t have experience with that, but if anyone wants to contribute and make some small code samples showing how it can be done (just encode raw RGBA frames in C++ to a video stream), then it would be possible…
-
Nice setup Yoda
-
Note to self, add used network ports to the manual: 8050-8055ish
-
Well m8, you are on another path here. What you started with, the MFDs over LAN, is fine. But if this doesn’t work flawlessly over the internet it will be a dead horse. Keep in mind that the same line must carry Falcon’s heavy mission data, IVC or TeamSpeak data, and your app’s data, all perfectly synched and performing top notch…
Sounds like a net nightmare.
-
Great job man got it working its really nice
-
Well m8, you are on another path here. What you started with, the MFDs over LAN, is fine. But if this doesn’t work flawlessly over the internet it will be a dead horse. Keep in mind that the same line must carry Falcon’s heavy mission data, IVC or TeamSpeak data, and your app’s data, all perfectly synched and performing top notch…
Sounds like a net nightmare.
Well, I don’t have a need for it atm. If someone wants to provide code that I can implement then no problem, but for me, I’m satisfied with what I’ve got right now. If you really want to share textures online, then someone needs to hook me up with some video encoding source code samples :).
-
Well, I don’t have a need for it atm. If someone wants to provide code that I can implement then no problem, but for me, I’m satisfied with what I’ve got right now. If you really want to share textures online, then someone needs to hook me up with some video encoding source code samples :).
In layman’s terms, are you capturing the video of a remote machine? Is the issue that you are capturing it without compression at high resolution?
There are tons of open source video encoding algorithms, h264 immediately comes to mind. Xvid would be a logical choice too.
-
Loaded the files per the Manual.PDF…
Would like some guidance with:
a. running on one computer with a secondary monitor (USB LCD or VGA LCD).
I assume that the DisplaysTransmitter.ini file must have the following (only showing the variables having to do with displaying on one PC; the others are in the file but I’m not showing them because they are unchanged). Are these correct?

[falconhook_shm]
active = 1

[falconhook_socket]
addr = "127.0.0.1"

b. Setting up the MFD display area on the remote PC’s primary or local PC’s secondary monitor
To get the Left MFD to display on a secondary monitor (1680x1050 or 800x480) which is placed to the right of the main monitor (1920x1200).
Again, I’m guessing here, but do we use the file CONFIG.XML to edit the height and width of the secondary monitor with respect to the size of the primary monitor?
If so, the upper left hand corner of the primary monitor is considered (x=0, y=0) and the upper right is (x=1920, y=0)?
Thus, the X_SCR, Y_SCR is the upper left of the remote display area on the secondary monitor, and the W_SCR and H_SCR are the width and height of the Left MFD area.

<displays>
<left_mfd>
<active>true</active>
<x_tex>0.625</x_tex>
<y_tex>0.625</y_tex>
<w_tex>0.375</w_tex>
<h_tex>0.375</h_tex>
<x_scr>1920</x_scr>
<y_scr>50</y_scr>
<w_scr>680</w_scr>
<h_scr>680</h_scr>
<alwaysontop>true</alwaysontop>
<border>false</border>
</left_mfd>
</displays>

c. To run the MFD shared memory on the same PC but a secondary monitor, do we just:
1. Put the contents of both the DisplaysTransmitter and DisplaysReceiver into BMS/bin/x86 (for 32-bit systems)
2. Edit the DisplaysTransmitter.ini and Config.XML files as mentioned in parts a and b above
3. Run the Start.bat and then the DisplaysTransmitter app

What else?
I tried the above and it did not work.
The Manual.PDF does not go to this level of detail for what I’m trying to do. Thanks
-
Loaded the files per the Manual.PDF…
Would like some guidance with:
a. running on one computer with a secondary monitor (USB LCD or VGA LCD).
I assume that the DisplaysTransmitter.ini file must have the following (only showing the variables having to do with displaying on one PC; the others are in the file but I’m not showing them because they are unchanged). Are these correct?

[falconhook_shm]
active = 1

This is stated in the DisplaysTransmitter.ini file. If you set shm to active, then the socket export will be inactive.
This setting is useful if you only intend to draw on the local PC, and not send over the network.

[falconhook_socket]
addr = "127.0.0.1"

None of the "falconhook_socket" settings have any effect when "falconhook_shm" is set to active.
b. Setting up the MFD display area on the remote PCs primary or local PCs secondary monitor
To get the Left MFD to display on a secondary monitor (1680x1050 or 800x480) which is placed to the right of the main monitor (1920x1200).
Again, I’m guessing here, but do we use the file CONFIG.XML to edit the height and width of the secondary monitor with respect to the size of the primary monitor?
If so, the upper left hand corner of the primary monitor is considered (x=0, y=0) and the upper right is (x=1920, y=0)?
Thus, the X_SCR, Y_SCR is the upper left of the remote display area on the secondary monitor, and the W_SCR and H_SCR are the width and height of the Left MFD area.

<displays>
<left_mfd>
<active>true</active>
<x_tex>0.625</x_tex>
<y_tex>0.625</y_tex>
<w_tex>0.375</w_tex>
<h_tex>0.375</h_tex>
<x_scr>1920</x_scr>
<y_scr>50</y_scr>
<w_scr>680</w_scr>
<h_scr>680</h_scr>
<alwaysontop>true</alwaysontop>
<border>false</border>
</left_mfd>
</displays>

The tex coordinates are the relative internal positions of the cockpit display data within the FBO/render target which is downloaded from VRAM to system RAM. If you do not know what that means, do not touch them.
The _scr values are the desktop coordinates and size of the window. These are global desktop coordinates, so they relate to your Windows desktop setup. If you are unsure, set border to true, and you can move the windows around yourself.
c. To run the MFD shared memory on the same PC but a secondary monitor, do we just:
1. Put the contents of both the DisplaysTransmitter and DisplaysReceiver into BMS/bin/x86 (for 32-bit systems)
2. Edit the DisplaysTransmitter.ini and Config.XML files as mentioned in parts a and b above
3. Run the Start.bat and then the DisplaysTransmitter app

No. The DisplaysTransmitter zip contents go into your x86 folder. This is the direct3d dll hook and the loader. Start Falcon by running the loader (the DisplaysTransmitter.exe).
The contents of the DisplaysReceiver.zip you place anywhere you like, e.g. programs/DisplaysReceiver/.
The start.bat starts the application with maximum Java optimizations. You can double-click the jar file itself instead, but that yields worse performance.

What else?
I tried the above and it did not work.
The Manual.PDF does not go to this level of detail for what I’m trying to do. Thanks
Well, there are details in the manual and the ini files, and some of the XML stuff is self-explanatory, but it depends on how much experience you have with this stuff. If you have time to make a list of questions (numbered or separated), either as a forum post or as a document, then we could perhaps answer those things in more detail in the manual?
-
@Yoda,
Please be patient with me as I struggle to figure this out. I will help with documenting how it worked for me and my single-PC-to-USB-LCDs application.
I currently do this for DCS using the LUA scripts and Helios for the CDU, and I hope to do the same for FalconBMS. Cheers mate.
-
@Yoda,
Please be patient with me as I struggle to figure this out. I will help with documenting how it worked for me and my single-PC-to-USB-LCDs application.
I currently do this for DCS using the LUA scripts and Helios for the CDU, and I hope to do the same for FalconBMS. Cheers mate.
No prob, take your time.
PS: IM, IM, IM!
I have it set up currently so that as soon as I switch on the main power for my slave PC, it starts up everything. On my main PC I run a startup script and then BMS, and then it’s all alive… really handy. That’s how I would like it to be for any user.
-
Not sure if this could help but I use this program for streaming live. It’s able to multicast, so you could have more than 1 client, and it takes care of all the encoding…
http://www.umediaserver.net/umediaserver/overview.html
RTMP/RTMPT Flash protocol
RTSP IETF RFC2326
H264, MPEG4 Video over RTP IETF RFC3984, IETF RFC3640
AAC, MP3 Audio over RTP ISO/IEC 14496-3, IETF RFC2250
MS-WMSP Windows Media Streaming Protocol
MPEG2-TS ISO/IEC 13818-1

SDK is available!
http://www.umediaserver.net/umediaserver/source.html
Contents of the package: COM components, samples and documentation for:
Web-based user administration, session-based authentication, custom live transform, custom encoding support, alarm acceptors for motion detection, custom user logging, adding/removing resources to/from the Media Server configuration metabase, starting/stopping recording for Live Server, connecting Live Server to Media Server, Archival Server automation. ActiveX control documentation. Sample web pages hosting the ActiveX control. C# player: a simple container of the ActiveX control. Flash player and sample web pages hosting it. WebCam Community demo web application demonstrating the use of the SDK for WebCam portal creation.
It’s also HTTP streaming compatible, so it can be viewed from a browser.
This could also remove the need to program for apple / android devices as most should be java / flash compatible.
Another possible solution? Emulating your output as a “WDM virtual camera device” could technically allow it to interface with any real time streaming solution.
Latency would be the main hurdle, I figure.
-
Not sure if this could help but I use this program for streaming live. It’s able to multicast, so you could have more than 1 client, and it takes care of all the encoding…
http://www.umediaserver.net/umediaserver/overview.html
RTMP/RTMPT Flash protocol
RTSP IETF RFC2326
H264, MPEG4 Video over RTP IETF RFC3984, IETF RFC3640
AAC, MP3 Audio over RTP ISO/IEC 14496-3, IETF RFC2250
MS-WMSP Windows Media Streaming Protocol
MPEG2-TS ISO/IEC 13818-1

SDK is available!
http://www.umediaserver.net/umediaserver/source.html
Contents of the package: COM components, samples and documentation for:
Web-based user administration, session-based authentication, custom live transform, custom encoding support, alarm acceptors for motion detection, custom user logging, adding/removing resources to/from the Media Server configuration metabase, starting/stopping recording for Live Server, connecting Live Server to Media Server, Archival Server automation. ActiveX control documentation. Sample web pages hosting the ActiveX control. C# player: a simple container of the ActiveX control. Flash player and sample web pages hosting it. WebCam Community demo web application demonstrating the use of the SDK for WebCam portal creation.
It’s also HTTP streaming compatible, so it can be viewed from a browser.
This could also remove the need to program for apple / android devices as most should be java / flash compatible.
Another possible solution? Emulating your output as a “WDM virtual camera device” could technically allow it to interface with any real time streaming solution.
Latency would be the main hurdle, I figure.
Thanks for investigating.
If anyone is willing to produce some C++ code samples using such tools (though I’d prefer ffmpeg), I could work from there. Personally though I’m not going to spend time doing it from scratch. Don’t get me wrong I’m very grateful for your help here, it’s just that for me it’s not something I want to prioritize atm. For my purposes I’ve got enough as it is. Give me a couple of source code samples? Thats another story, I could implement something better frmo there :). -
No prob, take your time.
PS: IM, IM, IM!
I have it set up currently so that as soon as I switch on the main power for my slave PC, it starts up everything. On my main PC I run a startup script and then BMS, and then it’s all alive… really handy. That’s how I would like it to be for any user.
I tried with borders enabled, and still I don’t see any windows on my primary or secondary monitors (tried USB and VGA).
There’s something I’m not doing right, and I haven’t figured it out yet. Where do the SHM-TX and SHM-RX fit into the one-PC display use model?

Regarding your startup script:
That would make a nice small GUI application: have it start all the GPT tools, or selectively, then save the config.
Or just provide the script and let folks edit what they want started and when.

Regarding IM, IM, IM…
I found you on Skype and left you an IM. You’re probably sleeping at 5 PM PST or 1 AM GMT. Sweden and the Pacific Coast USA aren’t so easily IMed. Maybe weekends.