Port Forwarding
-
-
well sounds like hamachi …
Nonsense.
IIRC this was disastrous unless all users are on it; then it’s a different story.
Nonsense.
And how stable are those software VPN things, like OpenVPN?
Stable enough that my phone disables all non-VPN 3G comms.
Also, from what I read, a VPN requires even more bandwidth, because data goes from the VPN client to the VPN server and only then onward… so this drives the server’s bandwidth demands way up… right?
Slightly.
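To put rough numbers on the hub-and-spoke point above: with all traffic relayed through the VPN server, each relayed packet crosses the server’s link twice. This is only a back-of-the-envelope sketch with made-up figures, not a measurement of OpenVPN or BMS:

```python
# Rough, hypothetical estimate for a hub-and-spoke VPN where all client
# traffic is relayed through the VPN server. Numbers are illustrative only.

def vpn_server_bandwidth(clients: int, per_client_kbps: float) -> float:
    """Each relayed packet enters the server once and leaves once,
    so it counts twice against the server's link capacity."""
    return clients * per_client_kbps * 2

print(vpn_server_bandwidth(16, 20.0))  # 640.0 kbps for a 16-ship flight
```

So the server’s load grows linearly with the number of clients, at roughly double the raw per-client rate.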
Wow, route metrics seem to cause trouble and must be set on all clients with this solution… a headache nightmare for an ignorant user like me…
Metrics are fine with OpenVPN.
-
many
What is this, six posts you’ve dodged answering? When I asked for an example, you said I misunderstood. When I asked for clarification, you backpedaled… and when I ask again, you affirm and try to drop it… leaving us where we started: me asking for an example.
I will take that as a no, then.
-
OK blue3wolf, if you can’t even understand what I’m saying, then fine, have your no.
Have you ever looked inside a full-mission RakNet log? Have you ever looked at a BMS packet log?
If you had, you would understand what I’m telling you. I’m not trying to avoid the subject, nor what you clearly imply! What I know is that when I experience abnormal behavior (as I described above), it is because users are connected on ports other than 2935.
I see connections being redone. I see packet loss.
When they are all on 2935, I don’t see connection requests and disconnections, and minor to no packet loss… Peace, we don’t have a fortune to divide here. My side of this is searching for actions and parameters to improve the experience, not trying to prove that I know and you don’t, or to start an ego fight…
And to give you an opportunity to balance your negativity: do you have something constructive to offer on the subject, or is leaving it all a mess OK with you?
-
Well, correct me if I’m wrong, but when the port changes, maybe a reconnection takes place inside RakNet? Have you seen the log with -mono? What if this happens at a critical moment and suddenly there is a disturbance in the data flow?
Take a flight of 20+ people; if 2-4 guys have this during the mission, then flip-flop???
-
A change of port is not expected with an average router. Routers typically have a timeout stating how long a route should be maintained (in the case of UDP). BMS traffic is dense enough not to exceed such a timeout. Of course, if the router is buggy, anything is possible.
-
reconnection? udp is stateless
SCNR
-
Stateless???
SCNR???
-
reconnection? udp is stateless
UDP is stateless, but that is OSI layer 4. The people here are more concerned with RakNet, which is OSI layer 5. And this one is stateful. But, yes, the router only deals with layer 4…
The only thing I can add to this discussion: yes, there are routers out there which do indeed change ports “on their own” when configuring port forwarding. I simply consider these routers broken. Some routers can clearly distinguish between TCP and UDP, others can’t and only work properly if you forward both UDP and TCP for the port in question - which is technically absolute nonsense, but this is how they work.
I remember vividly that this discussion has been going on ever since Falcon existed. We had plenty of “good routers vs. bad routers” topics and lists in various forums over the last decade…
Personally, I never ran into problems with e.g. the AVM FritzBox! (simple) or any router capable of running a custom DD-WRT firmware (complicated).
-
Dunc, you’re misunderstanding.
The routers don’t change ports on inbound, they change outbound, when falcon client first connects to the server.
IOW, if you have two hypothetical UDP clients local:4242 -> remote:4242, the NAT has to mangle local:4242 into local:4269/local:4237 or some other random gobbledygook, or else it has no way of distinguishing which is which.
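The collision described above can be sketched as a toy NAT translation table. This is a simplified illustration, not how any particular router firmware works; names and the port range are invented:

```python
# Toy sketch of why a NAT must remap colliding outbound source ports:
# two LAN hosts both send UDP from local port 4242 to the same server,
# and the translation table is keyed by the external port alone.

import itertools

class Nat:
    def __init__(self):
        self.table = {}                    # external_port -> (lan_ip, lan_port)
        self.next_port = itertools.count(40000)

    def outbound(self, lan_ip, lan_port):
        existing = self.table.get(lan_port)
        if existing is None or existing == (lan_ip, lan_port):
            port = lan_port                # port-preserving mapping
        else:
            port = next(self.next_port)    # collision: must remap
        self.table[port] = (lan_ip, lan_port)
        return port

nat = Nat()
print(nat.outbound("192.168.1.10", 4242))  # 4242: first host keeps its port
print(nat.outbound("192.168.1.11", 4242))  # 40000: second host is remapped
```

Only the first host to claim a given source port keeps it; everyone else gets “gobbledygook”.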
-
Sorry, don’t have the time to read all threads in full length anymore…
Could you simply QUICKLY sum up what you want/need? If I understand it correctly, you want the “allow dubious connection” option to be less strict, i.e. allow clients/routers which do incoming port forwarding correctly, but which re-map source ports?
-
Yes.
Allow from-client to-server remapped, but check whether forwarded port is accessible.
-
UDP is stateless, but that is OSI layer 4. The people here are more concerned with RakNet, which is OSI layer 5. And this one is stateful. But, yes, the router only deals with layer 4…
The router is, but NAT isn’t, IMHO. NAT puts a timer on a remapping, and if that expires, or screws up in some way, the source port will change.
The only thing I can add to this discussion: yes, there are routers out there which do indeed change ports “on their own” when configuring port forwarding. I simply consider these routers broken.
Unfortunately, NAT is under-specified, and routers behaving in this way do not violate any specification. I encountered a couple of NATs behaving like this in my team. To make matters worse, some of these routers are mandatory for the ISP in use (typically fiber optics), so it’s not a simple matter of swapping the router.
Allow from-client to-server remapped, but check whether forwarded port is accessible.
Traffic of each client arrives at the server with a possibly remapped source port. But the client informs the server (as part of the payload) of the source port it actually uses locally (typically 2935). If the server would reply not to the remapped source port, but the actual one, a client without proper port forwarding would be unable to join the server.
Personally, I see more in reporting the connection status of clients in the UI. If the names of the clients in the COMM window were colored, e.g. green for all p2p connections established, orange for at least one peer-to-peer connection routing through host, and red for failed connection with another client (if that is still possible), at least the users would know what’s wrong. All green would be perfect, some orange would be functionally okay although perhaps laggy, and one or more reds would be a no-go. Red users might even be refused to join a mission. Being unable to connect might be a mystery to most users.
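The reply-to-claimed-port idea above can be sketched in a few lines. This is not BMS or RakNet code; the two-byte payload layout carrying the client’s real local port is an invented placeholder:

```python
# Sketch: the client embeds its real local port (normally 2935) in the
# payload, and the server replies to that port rather than to the
# NAT-remapped source port. The reply only arrives if inbound port
# forwarding for the claimed port actually works.

import socket

GAME_PORT = 2935  # BMS default game port

def handle_datagram(server_sock: socket.socket) -> None:
    data, (client_ip, remapped_port) = server_sock.recvfrom(2048)
    # Hypothetical payload layout: first two bytes = client's local port.
    claimed_port = int.from_bytes(data[:2], "big")
    # Reply to the claimed port, ignoring the possibly remapped one.
    server_sock.sendto(b"ack", (client_ip, claimed_port))
```

A client behind a remapping NAT but with correct forwarding still gets the ack; a client without forwarding never sees the reply and cannot join.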
-
Unfortunately, NAT is under-specified, and routers behaving in this way do not violate any specification.
It’s also valuable to note that reference firewall/NAT implementations such as pf from OpenBSD behave this way.
-
Unfortunately, NAT is under-specified, and routers behaving in this way do not violate any specification.
It’s also valuable to note that reference firewall/NAT implementations such as pf from OpenBSD behave this way.
I don’t see how a symmetric NAT could conserve the source port. So if conserving source ports was a requirement, symmetric NATs shouldn’t exist. Not a bad thing for Falcon, as symmetric NATs are impossible to negotiate with NAT traversal as far as I know, but not realistic in the business world.
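The symmetric-NAT point above can be illustrated with a toy model: a symmetric NAT allocates one external port per (local endpoint, remote endpoint) pair, so it cannot conserve the source port across peers. Names and the port range here are invented:

```python
# Toy symmetric NAT: the mapping key includes the remote endpoint, so the
# same local socket is seen under a different external port by every peer.

import itertools

class SymmetricNat:
    def __init__(self):
        self.mappings = {}        # (lan_endpoint, remote_endpoint) -> ext_port
        self.ports = itertools.count(50000)

    def external_port(self, lan, remote):
        key = (lan, remote)
        if key not in self.mappings:
            self.mappings[key] = next(self.ports)
        return self.mappings[key]

nat = SymmetricNat()
local = ("192.168.1.10", 2935)
print(nat.external_port(local, ("server.example", 2935)))  # 50000
print(nat.external_port(local, ("peer.example", 2935)))    # 50001: new port per peer
```

This is also why symmetric NATs defeat hole-punching-style NAT traversal: no peer can predict the port another peer will see.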
-
Agree 142%
Glad that the issue was picked up to the devs.
-
Allow from-client to-server remapped, but check whether forwarded port is accessible.
Traffic of each client arrives at the server with a possibly remapped source port. But the client informs the server (as part of the payload) of the source port it actually uses locally (typically 2935). If the server would reply not to the remapped source port, but the actual one, a client without proper port forwarding would be unable to join the server.
Personally, I see more in reporting the connection status of clients in the UI. If the names of the clients in the COMM window were colored, e.g. green for all p2p connections established, orange for at least one peer-to-peer connection routing through host, and red for failed connection with another client (if that is still possible), at least the users would know what’s wrong. All green would be perfect, some orange would be functionally okay although perhaps laggy, and one or more reds would be a no-go. Red users might even be refused to join a mission. Being unable to connect might be a mystery to most users.
Could you do me a favor and summarize the desired behavior in a bullet-point list? However, both from a client to server and a server to client point of view. And - if needed - as well from a client to client point of view. All of these in combination with and without the “allow dubious” and “server hosts all” options set/not set.
I would forward the info to Mike, and then try to work with him on the implementation.
Thanks a bunch!
-
Can try…
always:
ignore source ports when connection initiated by client
dubious:
ensure port forwarding present
non-dubious:
enough for client to communicate with server and server only, though allow mesh participation
server hosts all:
disable c2c mesh (?)
-
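One possible reading of the bullet list above, expressed as a decision function. The semantics of the options are my interpretation of the list, the mode names are invented, and this is in no way actual BMS logic:

```python
# Hypothetical sketch of the acceptance rules listed above.
# All names and return values are invented for illustration.

def accept_client(allow_dubious: bool, forwarding_ok: bool,
                  server_hosts_all: bool) -> str:
    # always: source ports are ignored when the client initiates.
    if allow_dubious:
        # dubious: still require that inbound port forwarding works.
        if not forwarding_ok:
            return "rejected"
        mode = "mesh"
    else:
        # non-dubious: server-only reachability suffices, but the client
        # may still participate in the mesh if it can.
        mode = "mesh" if forwarding_ok else "server-relayed"
    if server_hosts_all:
        mode = "no-c2c-mesh"   # disable client-to-client mesh (?)
    return mode

print(accept_client(allow_dubious=False, forwarding_ok=False,
                    server_hosts_all=False))  # server-relayed
```

If this reading is wrong, the list’s author should correct it; the point is only that the rules are small enough to state as one function.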
@sthalik: I expect you need to spend a few more words to convey your thoughts fully to a programmer.
I’d like to discuss things a bit more first.
The concept of the client initiating communications with the server using source port 2935, then that source port being remapped by NAT, say to 1234, then the server responding not to 1234, but to 2935, feels funny. Aren’t we violating some rules when we do this?
Also, it’s still no guarantee that things will be 100%. At least, I believe some NATs are so restrictive that when port 2935 is used in conjunction with the server’s IP address, it can’t be used with other IP addresses (i.e. the other clients). However, this class of NATs will be a lot smaller than the “I have port forwarding for 2935, but will still remap outbound packets” class.
I’m not sure what the relevance is of Server Hosts All. I believe that has nothing to do with connectivity, just with who controls units in 3D?
Last but not least I wonder whether AllowDubiousConnections should be an issue. If we design a superior mechanism, can’t we forget about this option?
-
source port being remapped by NAT, say to 1234, then the server responding not to 1234, but to 2935, feels funny. Aren’t we violating some rules when we do this?
No. With dubious=0, it checks whether the source port equals 2935 and disconnects otherwise. Of course, it -still- has nothing to do with TCP/IP by doing so.
Last but not least I wonder whether AllowDubiousConnections should be an issue. If we design a superior mechanism, can’t we forget about this option?
This requires BMS to be robust enough not to cause instability with severe network issues, such as hosting 50+ people on flaky connections, all flying one campaign and shooting at each other.
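The strict check described above is tiny; a sketch, with hypothetical names rather than actual BMS code:

```python
# Minimal sketch of the dubious=0 behavior described above: the server
# compares the datagram's source port with 2935 and drops the client on
# a mismatch. Names are hypothetical, not actual BMS code.

EXPECTED_PORT = 2935

def check_source(remapped_port: int, allow_dubious: bool) -> bool:
    if allow_dubious:
        return True                        # accept any source port
    return remapped_port == EXPECTED_PORT  # strict: disconnect on mismatch

print(check_source(1234, allow_dubious=False))  # False: would disconnect
```

Which is exactly why a port-remapping NAT with otherwise correct forwarding fails this test.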
-
source port being remapped by NAT, say to 1234, then the server responding not to 1234, but to 2935, feels funny. Aren’t we violating some rules when we do this?
No. With dubious=0, it checks whether the source port equals 2935 and disconnects otherwise. Of course, it -still- has nothing to do with TCP/IP by doing so.
I assume you mean the disconnect based on source port (AllowDubious=0) when you state it has nothing to do with TCP/IP? Please keep in mind that this option wasn’t meant for public use, IIRC. It was just implemented to ensure proper conditions for MP beta testing.
The idea of having the server respond to 2935 instead of the possibly remapped source port ensures that 2935 is accessible to the server. However, it might still be inaccessible to other clients (i.e. symmetric NAT). The “ultimate” test would be to have all clients report the status of their part of the mesh network, and to display messages and/or kick users based on that information. The server could maintain an array of connection statuses of clients to other clients.
It’s not trivial to decide what to do with that information. E.g., when a client A connects that can only reach the server and none of the existing clients, not only will client A list all of its statuses as “not connected” or “routing through host”, every other client will also have one connection status reading “not connected” or “routing through host”, namely the connection to client A. Kick A because the overall status was OK before he connected? What if A connected as the first client? No other clients could join then.
I think a system where everybody is able to log on to the server, but only well-connected clients are able to join a mission, is ideal. Of course, the UI should indicate in some way what the interconnection status of each client is.
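The COMM-window coloring rule proposed earlier in the thread is straightforward to state as code. The state names and function are invented for illustration; this is not BMS UI code:

```python
# Sketch of the proposed per-client COMM-window colour: the server keeps
# pairwise connection states and derives a colour per client.
# State names ('p2p', 'via-host', 'failed') are hypothetical.

GREEN, ORANGE, RED = "green", "orange", "red"

def client_colour(statuses: dict) -> str:
    """statuses maps peer name -> 'p2p', 'via-host' or 'failed'."""
    if any(s == "failed" for s in statuses.values()):
        return RED       # no-go: at least one dead link
    if any(s == "via-host" for s in statuses.values()):
        return ORANGE    # functional, but routed through the host
    return GREEN         # all peer-to-peer links established

print(client_colour({"A": "p2p", "B": "via-host"}))  # orange
```

A mission-join gate could then simply refuse red clients while letting green and orange ones in.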