RTT Linux client possible?
-
Yeah, you are absolutely right, as the name already suggests. How could I forget that in my first post…
But regardless, my point about the additional effort still holds true. It would be way easier to simply start one executable than to set up multiple, let's call them "layers of abstraction", just to enable the RPi platform to run a non-native executable.
-
You might have a look at the MFDE, which provides the same or, better said, even more functionality than RTT, has a client/server architecture as well, and at least gives you access to the source code so you could write a native RPi client:
https://github.com/lightningviper/lightningstools/tree/master/src/MFDExtractor
-
Thanks for the hint. I have already looked at that repo many times. It is a great source of information on how things work in BMS, not only DE. I have already implemented a small non-networked MFD viewer that reads its information from shared mem, based on what I found in this repo. But if I started from that point for this project, I would also have to write the server component myself, and I specifically wanted to avoid implementing yet another server for yet another tool; instead, I wanted to build on the RTT tools shipped with BMS. Since that tool now also supports the other shared mem areas, I think it could be used as a general server component for all kinds of client implementations, so it could be regarded as the "official API" for cockpit extensions.
I like the FakeBMS approach of RTT, which mirrors all the shared mem stuff on a remote machine so that shared mem clients don't have to care whether they are running locally or remotely, but again, this only works on Windows.
The only (and best) way to include all those Linux-based clients like the RPi, or embedded stuff like Arduinos, is to read directly from the network, as there is little to no way to install Windows on these devices.
That's why I believe it would be great if the RTT server had a small manual, similar to the keyfile manual on the input side, that describes the UDP protocol, i.e. the output side of things.
-
You might have a look onto the MFDE which provides the same or better to say even more functionality as RTT…
Indeed, but this is what its author says about it:
I’ll be very interested in hearing if MFDE still works, because it is what I’m most familiar with… :whistle: .
Define "still works". MFDE is extremely obsolete, and has been for many years now. It dates back to 2007 and is built on antiquated 32-bit technology frameworks from the Windows 7 era like WinForms, and on non-hardware-accelerated 2D graphics technologies like GDI+. It's now almost 2021. MFDE was dead a long time ago, but it just somehow keeps refusing to stop breathing. Damned zombie code.
-
What I found useful was not MFDE itself but the shared memory part of the source code, which is kept updated with every BMS version. With it comes a shared mem tester that also includes a raw MFD texture export, and that works just fine even with this version. I tested it just yesterday.
But again: that doesn’t help with my current “requirement”.
-
Wasn't komurcu's MFDE server source released a year or so ago as well?
All the best,
Uwe
-
Yes, but I won't go for that, as it's VB spaghetti code.
-
I spent a couple of minutes yesterday trying to find its sources, just to dig a little deeper into the topic, without really intending to write my own or to connect my client to it, but I wasn't successful.
But even if I had been, and could have come up with something based on it, it would still be another codebase that would have to be maintained with every update. Hence my approach of connecting directly to a server that is shipped with BMS anyway.
-
FYI,
I'm planning to write up the RTTServer network protocol and message formats for you gents and ship it as part of the upcoming U1.
-
Thanks dunc, that’s really good news.
I already started on a client, using a very simple server and protocol of my own, just to be able to transmit some images and test the visualization on the client side.
Knowing that the docs will be released, I will now simply wait for the release and then port my protocol to the official one.
-
I've finished the writeup of the RTT network formats; you'll find it in the upcoming U1 in <BMS>\Tools\RTTRemote\RTTNetData.h.
However, since the info is neither secret nor new, and RTTServer is already released and available to be used, I can make it available here right away.
Also, for the real nerds: RTTServer v3.3 will additionally support UDP multicast alongside the RakNet protocol (however, only for shared mem data, not RTT):
/*
 * Dunc, January 2021, on behalf of Benchmark Sims
 *
 * Upfront note: in *addition* to the regular RTTServer/RTTClient network
 * protocol described here, *custom* remote shared mem clients may also receive
 * the data by means of UDP multicast on plain sockets independent of RakNet.
 * Please see the "MULTICAST.txt" document for further information.
 *
 * This document defines/describes the network message format that RTTServer is
 * using to communicate with RTTClients. You can use this information to build
 * your own RTTClient receiver applications.
 *
 * The transport layer is implemented on top of RakNet 4.081, i.e. your client
 * must use RakNet to implement the network connectivity. The official source
 * repository is located here:
 * https://github.com/facebookarchive/RakNet
 *
 * However, it is strongly advised to use the "larku" fork as the basis for your
 * implementation, as it includes the most pressing bug fixes:
 * https://github.com/larku/RakNet
 *
 * The following options need to be set in "RakNetDefinesOverrides.h" to make it
 * compatible with RTTServer at compile time:
 *
 * #define RAKNET_SUPPORT_IPV6 1
 * #define USE_SLIDING_WINDOW_CONGESTION_CONTROL 0
 *
 * Connect/disconnect handling should be purely handled by RakNet means, there
 * is no further RTT specific message handling needed (i.e. you can ignore the
 * message types MSG_CONNECT and MSG_DISCONNECT in the enum below, they are only
 * used internally in the native RTTServer/RTTClient).
 *
 * After the initial network connection is established, the message flow is as
 * follows:
 *
 * 1) The client sends an initial MSG_HANDSHAKE message to tell the server what
 *    data exactly it *requests*.
 *
 * 2) The server will send back an MSG_HANDSHAKE message with the *actual* data
 *    that it will send to the client during the session.
 *
 * 3) The server will send out MSG_IMAGE (RTT) and/or MSG_DATA (SharedMem)
 *    messages to the client continuously once it detects that BMS is providing
 *    the data. This is the "main message loop" (albeit one-way only). Note that
 *    IMAGE/DATA messages are sent unreliable, but ordered. That means that
 *    while it is possible for the server to skip sending out messages (e.g. due
 *    to network congestion), it is nevertheless ensured that each message the
 *    client receives is always *newer* than the previous one.
 *
 * 4) If a client disconnects or loses connection, it has to start out at step 1
 *    again to re-initiate the message flow.
 *
 * Since RakNet is used as the transport layer, all messages you receive are of
 * type "RakNet::Packet". Each RN packet holds its payload in an array of type
 * "unsigned char", called "data", which you must evaluate and handle
 * accordingly (this is not RTTServer/Client specific, but generic RakNet
 * functionality).
 *
 * In the following, I will describe the various data contents for each message
 * type. Note that I will only identify the *starting* fields in the data array,
 * you can deduce the length of the data from the data types themselves.
 *
 * RakNet stores the message type in data[0]. It uses a sizable number of
 * internal message types that get evaluated by RN automatically. Other internal
 * RN message types need to be handled by *you* (e.g. connect/disconnect/already
 * connected/request accepted/denied...) in order to enable your client to track
 * the connection status to the server from a *business* point of view. Check
 * out the various RakNet examples in case you need more info on this.
 *
 * (Note: you will never see an actual IP address in our payload "data". If you
 * want to know where a packet came from, you need to evaluate the
 * "systemAddress" field in the "RakNet::Packet".)
 *
 * Last but not least, it can hold custom message types, in our case
 * MSG_HANDSHAKE, MSG_DATA, MSG_IMAGE.
 *
 * MSG_HANDSHAKE messages are composed as follows:
 *
 * data[0] = MSG_HANDSHAKE
 * data[sizeof(MESSAGE_TYPE)] = <struct HANDSHAKE>
 *
 * The "struct HANDSHAKE" needs to hold the following info (check below):
 *
 * BYTE rttVersion = RTT_VERSION
 * (If there is a mismatch between the server version and the client version,
 * the server will not send out any data to the client.)
 *
 * BYTE fps = <how often will the image data be sent?>
 * (The client does not need to specify a value here. It is the server who
 * provides this info to the client in its MSG_HANDSHAKE response. This info can
 * be used by the client program to optimize message polling and/or render
 * update cycles.)
 *
 * BYTE useData[DATA_NUM] = <which SharedMem areas are requested by the client?>
 * (The client needs to set a non-0 value to the fields in the array that
 * correspond to the data areas it wants to receive. Check the "enum SMEM_DATA"
 * below.)
 *
 * BYTE useDisp[DISP_NUM] = <which RTT images are requested by the client?>
 * (The client needs to set a non-0 value to the fields in the array that
 * correspond to the RTT images it wants to receive. Check the "enum RTT_DISP"
 * below.)
 *
 * Note that the MSG_HANDSHAKE response message from the server might send back
 * different info than originally set by the client. This is to inform the
 * client about the actual data it will be sending. For example, if the server
 * detects a version mismatch, it will set ALL useData/useDisp fields to 0,
 * because it will not send out any data at all.
 *
 * MSG_DATA messages are composed as follows:
 *
 * data[0] = MSG_DATA
 * data[sizeof(MESSAGE_TYPE)] = <which SharedMem area are we sending?>
 * (Check the "enum SMEM_DATA" below for possible values.)
 *
 * data[sizeof(MESSAGE_TYPE) + sizeof(SMEM_DATA)] = <the actual SharedMem data>
 * (The actual SharedMem object names, their composition and sizes are defined
 * in the "FlightData.h" file in the BMS folder, e.g. in
 * "<BMS>\Tools\SharedMem". If you use the C/C++ object names, that would be
 * FlightData, FlightData2, OSBData, IntellivibeData.)
 *
 * MSG_IMAGE messages are composed as follows:
 *
 * data[0] = MSG_IMAGE
 * data[sizeof(MESSAGE_TYPE)] = <struct RTT_HEADER>
 * data[sizeof(MESSAGE_TYPE)+sizeof(RTT_HEADER)] = <the actual image data>
 *
 * The "struct RTT_HEADER" needs to hold the following info (check below):
 *
 * RCV_MODE mode = <which image type is this?>
 * (Check the "enum RCV_MODE" below for possible values. The payload data after
 * RTT_HEADER will be binary data in the format specified here.)
 *
 * RTT_DISP disp = <which RTT image is this?>
 * (Check the "enum RTT_DISP" below for possible values.)
 *
 * WORD width = <the resolution width of the image>
 * WORD height = <the resolution height of the image>
 *
 * DWORD size = <the size of the binary image data after RTT_HEADER in bytes>
 *
 * Note that for RCV_MODE "JPG" and "PNG", the binary data is just that, a
 * "proper" JPG or PNG image. It is up to your client to decode/render that
 * image.
 *
 * For RCV_MODE "RAW", you will receive a continuous array of individual "struct
 * COLOR_RGBA" (8 bit per color/alpha, see below) values, each image line simply
 * concatenated to the former one. So you need to chop them up according to the
 * RTT_HEADER "width" and reconstruct/render the image from the individual
 * COLOR_RGBA values (one value per pixel).
 *
 * For RCV_MODE "LZ4", you will also receive a continuous array of individual
 * "struct COLOR_RGBA" values like the "RAW" option, however, the data is
 * compressed with LZ4 and hence you need to unpack it before handling it in the
 * same way as you handled "RAW".
 *
 * ...and that's all there is to it. How you actually use the received
 * image/shared mem data is completely up to your client application and your
 * imagination.
 *
 * I hope this info is helpful and concise enough to get you going.
 *
 * Cheers,
 * Dunc
 */

#pragma once

#define RTT_VERSION 33U // v3.3

// data structures within "pragma pack" will be serialized over the wire
#pragma pack(push,1)

enum MESSAGE_TYPE : BYTE
{
    MSG_CONNECT = 134U, // RakNet ID_USER_PACKET_ENUM
    MSG_DISCONNECT,
    MSG_HANDSHAKE,
    MSG_IMAGE,
    MSG_DATA
};

enum SMEM_DATA : BYTE
{
    F4 = 0U, // FalconSharedMemoryArea            (FlightData)
    BMS,     // FalconSharedMemoryArea2           (FlightData2)
    OSB,     // FalconSharedOsbMemoryArea         (OSBData)
    IVIBE,   // FalconIntellivibeSharedMemoryArea (IntellivibeData)
    DATA_NUM
};

enum RTT_DISP : BYTE
{
    HUD = 0U,
    PFL,
    DED,
    RWR,
    MFDLEFT,
    MFDRIGHT,
    HMS,
    DISP_NUM
};

enum RCV_MODE : BYTE
{
    PNG = 0U,
    JPG,
    LZ4,
    RAW,
    MODE_NUM
};

struct HANDSHAKE
{
    BYTE rttVersion;
    BYTE fps;
    BYTE useData[DATA_NUM]; // 0 = false, other = true
    BYTE useDisp[DISP_NUM]; // 0 = false, other = true
};

struct RTT_HEADER
{
    RCV_MODE mode;
    RTT_DISP disp;
    WORD width;
    WORD height;
    DWORD size;
};

struct COLOR_RGBA
{
    BYTE r, g, b, a;
    COLOR_RGBA(BYTE r = 0U, BYTE g = 0U, BYTE b = 0U, BYTE a = 255U)
        : r(r), g(g), b(b), a(a) {}
};

#pragma pack(pop)
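To make the byte layout above a bit more tangible, here is a minimal, RakNet-free parsing sketch. This is only an illustration under assumptions: the Windows typedefs BYTE/WORD/DWORD are replaced with fixed-width stand-ins, the buffer is built by hand instead of coming from a RakNet::Packet, and a little-endian host is assumed. `parseImageMessage` is my own helper name, not part of RTTNetData.h; only the offset arithmetic follows the header.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Portable stand-ins for the Windows typedefs used in RTTNetData.h.
using BYTE  = std::uint8_t;
using WORD  = std::uint16_t;
using DWORD = std::uint32_t;

// Values as defined in RTTNetData.h.
enum MESSAGE_TYPE : BYTE { MSG_CONNECT = 134U, MSG_DISCONNECT, MSG_HANDSHAKE, MSG_IMAGE, MSG_DATA };
enum RTT_DISP : BYTE { HUD = 0U, PFL, DED, RWR, MFDLEFT, MFDRIGHT, HMS, DISP_NUM };
enum RCV_MODE : BYTE { PNG = 0U, JPG, LZ4, RAW, MODE_NUM };

#pragma pack(push, 1)
struct RTT_HEADER {
    RCV_MODE mode;
    RTT_DISP disp;
    WORD width;
    WORD height;
    DWORD size;
};
#pragma pack(pop)

// Parse a MSG_IMAGE payload: data[0] holds the message type, the RTT_HEADER
// follows at data[sizeof(MESSAGE_TYPE)], and the image bytes come after it.
// Returns false if the buffer is not a well-formed MSG_IMAGE message.
bool parseImageMessage(const BYTE* data, std::size_t len,
                       RTT_HEADER& hdr, const BYTE*& pixels)
{
    if (len < sizeof(MESSAGE_TYPE) + sizeof(RTT_HEADER)) return false;
    if (data[0] != MSG_IMAGE) return false;
    std::memcpy(&hdr, data + sizeof(MESSAGE_TYPE), sizeof(RTT_HEADER));
    if (len < sizeof(MESSAGE_TYPE) + sizeof(RTT_HEADER) + hdr.size) return false;
    pixels = data + sizeof(MESSAGE_TYPE) + sizeof(RTT_HEADER);
    return true;
}
```

A real client would obtain `data`/`len` from the RakNet packet after the handshake; MSG_DATA messages can be dissected the same way, with the SMEM_DATA byte at data[sizeof(MESSAGE_TYPE)].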
Happy developing!
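As a small addendum following the RCV_MODE "RAW" description above: a hedged sketch (COLOR_RGBA redefined locally with fixed-width stand-ins, `toScanlines` being my own helper name) of chopping the continuous pixel stream into scanlines according to the RTT_HEADER width/height. For "LZ4" the same function applies after first LZ4-decompressing the payload.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

#pragma pack(push, 1)
// 8 bit per color/alpha, as described for RCV_MODE RAW/LZ4.
struct COLOR_RGBA { std::uint8_t r, g, b, a; };
#pragma pack(pop)

// Split the continuous COLOR_RGBA stream of a RAW image into scanlines,
// using the width/height fields from the RTT_HEADER of the MSG_IMAGE message.
std::vector<std::vector<COLOR_RGBA>>
toScanlines(const COLOR_RGBA* pixels, std::uint16_t width, std::uint16_t height)
{
    std::vector<std::vector<COLOR_RGBA>> rows(height);
    for (std::uint16_t y = 0; y < height; ++y)
        rows[y].assign(pixels + y * width, pixels + (y + 1) * width);
    return rows;
}
```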
-
A quick follow-up:
The upcoming RTTServer/RTTClient v3.3 (which will be part of 4.35 U1) bumps the RTT_VERSION to 33U.
Before you ask, no, there will be no actual network message format changes, but that particular datum is used for other purposes as well and hence will be bumped with each major & minor version (however not micro & patch).
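For client authors, the practical consequence (per the protocol description earlier in the thread) is that on a version mismatch the server zeroes ALL useData/useDisp fields in its MSG_HANDSHAKE response. A hedged sketch of detecting that, with fixed-width stand-ins for the Windows typedefs; `serverAcceptedSomething` is my own helper name, not part of RTTNetData.h:

```cpp
#include <cassert>
#include <cstdint>

constexpr std::uint8_t RTT_VERSION = 33U; // v3.3, as of 4.35 U1

enum : int { DATA_NUM = 4, DISP_NUM = 7 }; // array sizes from RTTNetData.h

#pragma pack(push, 1)
struct HANDSHAKE {
    std::uint8_t rttVersion;
    std::uint8_t fps;
    std::uint8_t useData[DATA_NUM]; // 0 = false, other = true
    std::uint8_t useDisp[DISP_NUM]; // 0 = false, other = true
};
#pragma pack(pop)

// On a version mismatch the server will not send any data; its response
// carries all-zero useData/useDisp arrays, which a client can check for.
bool serverAcceptedSomething(const HANDSHAKE& resp)
{
    if (resp.rttVersion != RTT_VERSION) return false;
    for (std::uint8_t b : resp.useData) if (b) return true;
    for (std::uint8_t b : resp.useDisp) if (b) return true;
    return false;
}
```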
-
FYI, I've added some missing info about RakNetDefinesOverrides.h at compile time to the post above, as well as the UDP multicast info.
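Since the exact multicast parameters live in MULTICAST.txt (which I don't have in front of me), here is only a generic sketch of joining a UDP multicast group on plain POSIX sockets under Linux, the route custom non-Windows clients would take. The group address and port below are illustrative placeholders, not the real RTTServer values.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cassert>
#include <cstring>

// Join a UDP multicast group and return a socket ready for recvfrom().
// NOTE: the real group/port are documented in MULTICAST.txt shipped with
// RTTServer; the values passed in by the caller here are placeholders.
int openMulticastReceiver(const char* group, unsigned short port)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return -1;

    int reuse = 1; // allow several local clients to listen on the same port
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(sock);
        return -1;
    }

    ip_mreq mreq{}; // membership request: group address + local interface
    inet_pton(AF_INET, group, &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}
```

A client would then recvfrom() on the returned socket and interpret the datagrams according to MULTICAST.txt.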
-
I finally managed to create an initial proof of concept of what I described previously in this thread.
This is an RTT client running natively on a Raspberry Pi 4, directly connected to the official RTTServer. There are still some stability issues, but at least it shows that what I was thinking of is possible in general…
-
I finally managed to create an initial proof of concept of what I described previously in this thread.
This is an RTT client running natively on a Raspberry Pi 4, directly connected to the official RTTServer. There are still some stability issues, but at least it shows that what I was thinking of is possible in general…
I have a couple of Pi 4s and would be happy to beta test it if you want. Let's see if the Pi can handle two monitors full of instruments.
-
Sure, I appreciate your help. I just have to polish things a little bit and get rid of the hardcoded IP address and so on, so that it can be used in different environments. I'll try to get this done this weekend…
But to make things clear: initially, the intent was to extract only the RTT stuff like the MFDs, HUD, DED, PFL, and RWR. Currently, only the MFDs are working, which is what I intended to use it for exclusively.
It's not meant to be a Pi port of YAME. There are no plans (and no time ;-)) to include that functionality, if that is what you meant by "full of instruments".
-
Yeah, I know some instruments are generated manually by tools like YAME and MFDE, and some are extracted from the shared texture.
-
Here is an initial version that might be usable to some extent in flight. At the moment, only the left and right MFDs are supported, even if the config implies more.
There is still a problem in the ARM version where the display freezes after a couple of minutes if you stay too long in screens that require large data transmissions, like the TGP, A/G radar, and the like.
The Windows version should work out of the box. There is a small readme that describes the system requirements. I tested it only with the Raspberry Pi 4 mentioned in the readme, with a single screen.
If everything works as designed, you should see something like this:
It would be nice if you could update me with test experiences from different setups, like a Pi 3 or a Pi 4 with dual screens.
-
There is still a problem in the ARM version where the display freezes after a couple of minutes if you stay too long in screens that require large data transmissions, like the TGP, A/G radar, and the like.
Are you sure it's an ARM problem and not the current BMS bug of freezing / not rendering textures to SharedTexMemory?
-
Yes, I'm pretty sure the problem I described is "self-made", because this specific problem hasn't occurred in the Windows version yet. But tbh, while testing, the Windows version only ran for 5 to 10 minutes max, so I could be wrong.
-
Yes, I'm pretty sure the problem I described is "self-made", because this specific problem hasn't occurred in the Windows version yet. But tbh, while testing, the Windows version only ran for 5 to 10 minutes max, so I could be wrong.
A good repro case is the TR_BMS_12_HARM training mission in KTO. With that training mission running, I could repro the BMS-internal SharedMemory freeze in no more than 2 minutes.