this post was submitted on 02 Oct 2023
29 points (96.8% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


Let me clarify: We have a certain amount of latency when streaming games from both local and internet servers. In either case, how do we improve that latency, and what limits will we run into as the technology progresses?

all 45 comments
[–] Kwdg@discuss.tchncs.de 18 points 1 year ago

Ignoring any computation, I guess the fastest would depend on the medium that transports the data, and the limit there is the speed of light

[–] Blake@feddit.uk 14 points 1 year ago (1 children)

Theoretically, the latency between the streamer and viewers could be zero or near zero.

For playing games online, the minimum possible latency is the speed of light delay. We’re pretty much already at the limit for that one, and we’re even using a lot of pretty clever techniques to mitigate latency such as lag compensation.
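To put a rough number on that floor, here's a back-of-envelope sketch; the London-New York distance and the fiber velocity factor are illustrative assumptions, not measured values:

```python
# Sketch: lower bound on round-trip time imposed by signal propagation,
# ignoring routing detours, serialization, and processing delays.
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
FIBER_VELOCITY_FACTOR = 0.67     # typical signal speed in fiber, ~2/3 c

def min_rtt_ms(distance_km: float, velocity_factor: float = FIBER_VELOCITY_FACTOR) -> float:
    """Best-case round-trip time in milliseconds over a straight-line path."""
    one_way_s = distance_km / (C_VACUUM_KM_S * velocity_factor)
    return 2 * one_way_s * 1000

# e.g. London -> New York, ~5,570 km great-circle distance
print(round(min_rtt_ms(5570), 1))  # roughly 55 ms best case through fiber
```

No amount of clever engineering gets you under that figure for a given path; lag compensation only hides it.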

[–] NotAnArdvark@lemmy.ca 6 points 1 year ago (2 children)

Ooh, we're not at the speed of light as a limit yet, are we? Do you mean "point A to point B" on fibre, or do you actually mean full on "routed-over-the-internet"? Even with fibre (which is slower than the speed of light), you're never going in a straight line. And, at least where I live, you're often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly.

[–] jamiehs@lemmy.ml 4 points 1 year ago (1 children)

For most of us, there is no difference though; you get what you get.

I live in a nice neighborhood but I won’t ever get fiber… we have underground utilities and this area is served by coaxial cable. There’s no way in hell they are digging up miles of streets to lay fiber; you get what you get.

My ISP latency is like 16-20ms but when sim racing it just depends on where the race server is (and where my competitors are). As someone on the US west coast, if I’m matched with folks in EU and some others in AUS/NZ, the server will likely be in EU and my ping will be > 200. My Aussie competitors will be dealing with 300-400.

It’s not impossible to share a track at those latencies, but for close racing or a competitive shooter… errrr that just doesn’t work.

The fact that I’m always at around 200ms for EU servers might be improved if we could run a single strand of fiber from my house to the EU server (37ms!) but there would still be switching delays, etc. So yeah, the speed of light is the limit, but to your point, there’s a lot of other stuff that adds overhead.

[–] Blake@feddit.uk 3 points 1 year ago (2 children)

Theoretically it doesn’t really matter whether your connection is fiber or copper. Electrical signals move through copper at roughly the same speed as light moves through fiber. The advantages that fiber has over copper are that it can be run longer distances without needing boosting, and that you can run an absolute fuckton more end-to-end connections in the same diameter of cable. More connections means less contention - at least at one end of the pipe. The problem then moves to the ISP’s routers :)
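A quick sketch of that point; the velocity factors below are typical textbook values, not measurements of any particular cable:

```python
# Compare one-way propagation delay over 1,000 km for two media.
# Both twisted-pair copper and glass fiber carry signals at roughly 2/3 c,
# so the medium itself barely matters for latency.
C = 299_792  # speed of light in vacuum, km/s

VELOCITY_FACTORS = {"copper (Cat6)": 0.66, "fiber (single-mode)": 0.68}

for medium, vf in VELOCITY_FACTORS.items():
    delay_ms = 1000 * 1000 / (C * vf)  # 1,000 km, converted to milliseconds
    print(f"{medium}: {delay_ms:.2f} ms per 1,000 km")
```

Both come out around 5 ms per 1,000 km; the difference is well under a millisecond.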

I’d say that the chances are actually quite good that you’ll get fiber internet within the next 10 years. Whether or not it improves your internet connection is another question entirely!

[–] jamiehs@lemmy.ml 2 points 1 year ago

Right on man, thanks for the additional context/info. Much appreciated!

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago (1 children)

It needs less boosting; fiber still needs repeaters over sufficiently long spans.

Really the biggest advantage to fiber from a consumer perspective is that it's not subject to signal deformation and interference. You don't have nearly as many issues with fiber Internet as a result.

[–] Blake@feddit.uk 0 points 1 year ago (1 children)

Sorry, what I wrote here was unclear, I wrote it needs less boosting in another comment, but re-reading this one, it does sound like I’m claiming it needs no boosting over any distance - that’s not what I meant though! I just meant that you can run an equivalent link without any boosting further than you could with copper.

Interference isn’t actually that big of a deal for Ethernet over copper, unless the installer does something silly like run UTP alongside high power electrical lines, or next to a diesel generator, or something. Between shielding, the use of balanced signals, and the design of twisted pair, most interference is eliminated.

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago* (last edited 1 year ago)

Interference isn’t actually that big of a deal for Ethernet over copper, unless the installer does something silly like run UTP alongside high power electrical lines, or next to a diesel generator, or something. Between shielding, the use of balanced signals, and the design of twisted pair, most interference is eliminated.

This should be true, but in practice ... there are a lot more environmental factors that can and do impact copper cables (which can result in some really wacky situations to diagnose, "this only happens on hot days when XYZ part of the line to your house expands"), and more installation errors (e.g., not grounding the wire). That doesn't matter much for TCP applications/protocols, but for UDP applications/protocols it can all add up to something that's observable in the real world.

You get a lot closer to "all" interference being removed with fiber ... and for most gamers at least, that's probably the most noticeable improvement on fiber vs "cable" service (other than perhaps a download/upload speed bump). Pings are in my experience roughly the same, though the fiber networks tend to fare a bit better (probably just from newer hardware backing the network).

It's becoming more of an issue too (in Ohio at least) because more and more ISPs are locking folks out of their modem's diagnostics, so they can't actually see that the modem is detecting signal quality issues coming into the house... I almost always recommend folks just go with fiber all the way into their house if they have the option, unless they just use the web and watch videos (in which case who cares, TCP will make it so you don't care unless it's really bad, and the really bad cases are typically fixed the first time the tech is out).

It's one of those things where there's not much of a benefit for consumers on paper (theoretically -- as you say -- you could have copper service that's just as good and fast as fiber) ... but in practice, fiber just saves a lot of headaches for all parties because of its resistance to interference and simpler installation.

[–] Blake@feddit.uk 0 points 1 year ago* (last edited 1 year ago) (1 children)

Even with fibre (which is slower than the speed of light)

This makes no sense. Are you referring to the speed of light in a vacuum? Fiber transmits data using photons which travel at the speed of light. While, yes, there is often some slowing of signals depending on whether the fiber is single-mode or multi-mode and whether the fiber has intentionally been doped, it’s close enough to the theoretical maximum speed that it’s not really worth splitting hairs (heh) over

There are additionally some delays added during signal processing (modulation and demodulation from the carrier to layer 3) but again this is so fast at this point it’s not really conceivably going to get much faster.

The bottleneck really is contention vs. throughput, rather than the media or modulation/demodulation slash encoding/decoding.

At least to the best of my knowledge!

you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

That’s generally not how routing works - your packets might take different routes depending on different conditions. Just like how you might take a different road home if you know that there’s roadworks or if the schools are on holiday, it can be genuinely much faster for your packets to take a diversion to avoid, say, a router that’s having a bad day.

Routing protocols are very advanced and capable, taking many metrics into consideration for how traffic is routed. Under ideal conditions, yes, they’d take the physically shortest route possible, but in most cases, because electricity moves so fast, it’s better to take a route that’s hundreds of miles longer to avoid some router that got hacked and is currently participating in some DDoS attack.

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago (1 children)

That’s generally not how routing works

It is how it works ... mostly because what they're talking about is the fact that the Internet (at least in the US) is not really set up like a mesh at the ISP level. It's somewhere between "mesh" and "hub and spoke", where lots of parties that could talk directly to each other don't (because nobody ever put down the lines and set up the routing equipment to connect two smaller ISPs or customers directly).

https://www.smithsonianmag.com/smart-news/first-detailed-public-map-us-internet-infrastructure-180956701/

[–] Blake@feddit.uk -1 points 1 year ago (1 children)

There’s absolutely nothing wrong with that topology - the fact that you seem to think that the design is a bad thing really demonstrates your lack of understanding here.

For example, have you never wondered why we don’t just connect every device in a network all together like a big daisy chain? Or why we don’t use a mesh network? There is a large number of reasons why we don’t really use those topologies anymore.

I don’t want to get into the specifics, but in general, the more networks a router is connected to, the less efficient it is overall.

The propagation delay is pretty insignificant for most routers. Carrier grade routers like those at the core of the internet can handle up to 43 billion packets per second, another hop is absolutely nothing in terms of delay.

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago* (last edited 1 year ago) (1 children)

For example, have you never wondered why we don’t just connect every device in a network all together like a big daisy chain? Or why we don’t use a mesh network? There is a large number of reasons why we don’t really use those topologies anymore.

Well daisy chaining would be outright insanity ... I'm not even sure why you'd jump to something that insane ... my internet connection doesn't need to depend on the guy down the street.

Making an optimally dense mesh network (and to be clear, I mean a partially connected mesh topology with more density than the current situation ... which at a high level is already a partially connected mesh topology) would not be optimally cost effective ... that's it.

the more networks a router is connected to, the less efficient it is overall. another hop is absolutely nothing in terms of delay.

Do you not see how these are contradictory statements?

Yeah, you'd need more routers, you have more lines. But you could route more directly between various points. e.g., there could be at least one major transmission line between each state and its adjacent states to minimize the distance a packet has to physically travel and increase redundancy. It's just more expensive and there's typically not a need.

This stuff happens in more population-dense areas because there's more data and more people, so direct connections make more sense. It's just money; it's not that having fewer lines through the great plains somehow makes the internet faster... Your argument and your attitude is something else. I suspect we're just talking past each other, but w/e.

[–] Blake@feddit.uk 1 points 1 year ago (1 children)

I’m becoming more and more convinced that you don’t really know what you’re talking about. Are you a professional network engineer or are you just a hobbyist?

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago (1 children)

I wear a lot of hats professionally; mostly programming. I don't do networking on a day-to-day basis though if that's what you're asking.

If you've got something actually substantive to back up your claim that (if money was no object) the current topology is totally optimal for traffic from an arbitrary point A <-> B on that map though... have at it.

This all started with:

you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

And that's absolutely true ... depending on your location, you will travel an unnecessary distance to get to your destination ... because there just aren't wires connecting A <-> B. Just like a GPS will take you on a non-direct path to your destination because there's not a road directly to it.

A very simple example where the current topology results in routing all the way out to Seattle only to backtrack: https://geotraceroute.com/?node=0&host=umt.edu#

[–] Blake@feddit.uk 1 points 1 year ago (1 children)

The problem that I’m having (and why I asked that) is because I was assuming that you would have some knowledge which you don’t seem to have with a lot of my comments. I’m really not trying to be rude, but it makes it a lot more difficult to explain the flaws in your reasoning when you’re talking about topics that are beyond your knowledge as if you know them well.

I have explained the realities of the situation to you, if you don’t want to accept them, that’s fine, but you’re basically arguing with an expert about something you don’t really understand very well. I’m happy to explain stuff but you should just ask rather than assume you know better because it makes it much more difficult for me to understand the gaps in your understanding/knowledge.

So ultimately, for routers, we have a number of limited resources. Firstly, yes, interfaces, but also the usual stuff - CPU, RAM, etc.

Now, I mentioned before that routing protocols are very complex - they have many metrics which are taken into account to determine what path is ultimately best for each packet. This is a process which can be quite intensive on CPU and RAM - because the router needs to “remember” all of the possible routes/destinations a packet can travel, as well as all of the metrics for each destination - distance, delays, administrative distance, TTL, dropped packets, etc. and then make a decision about processing it. And it needs to make these decisions billions of times a second. Slowing it down, even a tiny bit, can hugely impact the total throughput of the router.

When you add another connection to a router, you’re not just increasing the load for that one router, but for the routers which connect to the routers which connect to those routers which route to the routers that route to that router… you get the idea. It increases the number of options available, and so it places additional burden on memory and processing. When the ultimate difference in distance is even an extra 100 miles, that’s less than a millisecond of travelling time. It’s not worth the added complexity.

That’s what I meant when I said that an extra hop isn’t worth worrying about, but adding additional connections is inefficient.

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago* (last edited 1 year ago) (1 children)

but you’re basically arguing with an expert about something you don’t really understand very well. I’m happy to explain stuff but you should just ask rather than assume you know better because it makes it much more difficult for me to understand the gaps in your understanding/knowledge.

Okay, I'll apologize... For context though, in general, it's the internet and it's hard to take "expert" at its word (and even outside of an online context, "expert" is a title I'm often skeptical of ... even when it's assigned to me :) ). I've argued with plenty of people (more so on Reddit) that are CS students... It's just the price of being on the internet I guess, ha

I'm still not sure I agree with your conclusions, but that's mostly healthy skepticism... because your argument isn't tracking with ... well ... physics or distributed computing... more direct "routes" and taking load off "routes" that aren't the optimal route typically is a great way to speed up a system. It's definitely true that doing that adds overhead vs just having a few "better" systems do the work (at least from some perspectives), but it's hard for me to imagine that with sufficient funds it truly makes it worse to give routing algorithms more direct options and/or cut out unnecessary hops entirely.

Reducing "hops" and travel time is kind of the bread and butter of performance work when it comes to all kinds of optimizations in software engineering.

If you want me to ask a question ... what's your explanation for why there are so many more connections in the north east and west coast if more connections slows the whole system down? Why not just have a handful of routes?

[–] Blake@feddit.uk 1 points 1 year ago* (last edited 1 year ago)

You can’t really compare small-scale clusters of highly available services with the scale of the entire Internet, it’s just an entirely different ballgame. Though even in small scale setups, there is always a sweet spot between too many paths and not enough paths - VRRP (which is the protocol usually used for high availability) actually has quite a big overhead, you can’t have too many connections on the same network or it causes lots of problems.

Internet scale routing usually uses BGP, which also has quite a heavy overhead.

I guess all you need to understand is that routing isn’t free, and the more routes, the more overhead. So there’s always going to be a point where adding more routes just makes things slower rather than faster. And BGP… is just a bit of a mess, right now, honestly. The BGP table has grown so big that a lot of older devices can’t keep it in fast memory anymore, so they either have to be replaced with newer hardware or use slow memory (and therefore slow processing of packets). So it’s not really in everyone’s best interests to just keep adding more routes. It’s harder and harder to justify.
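A minimal sketch of what every single forwarding decision involves (the prefixes below are hypothetical, drawn from documentation ranges): the router must find the most-specific prefix covering the destination, and real BGP tables hold on the order of a million such routes, which is where the memory pressure comes from.

```python
import ipaddress

# Hypothetical mini routing table mapping prefixes to next-hop names.
routes = {
    "0.0.0.0/0": "transit-1",         # default route
    "203.0.113.0/24": "peer-A",
    "203.0.113.128/25": "peer-B",     # more specific prefix wins
}

def lookup(dst: str) -> str:
    """Longest-prefix match: pick the most specific route covering dst."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (net for net in map(ipaddress.ip_network, routes) if addr in net),
        key=lambda net: net.prefixlen,
    )
    return routes[str(best)]

print(lookup("203.0.113.200"))  # -> peer-B (the /25 beats the /24)
print(lookup("198.51.100.7"))   # -> transit-1 (only the default matches)
```

Hardware routers do this with specialized memory (TCAM) rather than a linear scan, but the table still has to fit somewhere fast.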

why there are so many more connections in the north east and west coast if more connections slows the whole system down

I’m not from the US, so at best it would be an educated guess.

Firstly, it’s not as simple as just “more connections is more slow”, it means there’s a greater overhead. If the improvement from adding another line is greater than the overhead, then it can be worthwhile. For example, imagine a simple network with three routers, A, B and C, where A is connected only to B, and C is connected only to B, meaning that B is connected to both A and C. If there is a large amount of traffic between A and C, it may be worth adding a direct connection between them. If there isn’t, then it’s probably not worth doing.
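Putting rough numbers on that A-B-C example (the per-hop and per-kilometre delays are assumed ballpark figures, not measurements):

```python
# Back-of-envelope: is a hypothetical direct A-C link worth it?
PER_HOP_MS = 0.05        # assumed processing/forwarding delay per router hop
MS_PER_KM = 5 / 1000     # ~5 microseconds per km in fiber (~200,000 km/s)

def path_delay_ms(hops: int, km: float) -> float:
    return hops * PER_HOP_MS + km * MS_PER_KM

via_b = path_delay_ms(hops=2, km=1000)   # A -> B -> C, 1,000 km total
direct = path_delay_ms(hops=1, km=900)   # direct A -> C, slightly shorter
print(f"via B: {via_b:.2f} ms, direct: {direct:.2f} ms")
```

The direct link saves roughly half a millisecond here; whether that justifies the cost and the extra routing state depends entirely on traffic volume.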

I guess it’s a bit like adding a new road between two existing roads. Is it worth adding a junction and a set of traffic lights to some existing roads, or would that slow down traffic enough not to be worth doing?

Maybe, since you work with software more, it would make sense to put it this way: why don’t you create an index for every single possible column and table in SQL?

Or just look at it like premature optimisation. There’s a saying about premature optimisation in software engineering! ;-)

Another thing to keep in mind though is that there’s definitely still quite a few bad decisions still kicking around from when the internet was new. It takes time and effort to get rid of the legacy junk, same as in programming.

[–] squirrel@discuss.tchncs.de 13 points 1 year ago (3 children)

I played on Google Stadia from day 1 until it got shut down. I mainly played racing games like F1 and GRID, with the occasional session in RDR2 or The Division 2. Latency was never a problem for me.

The main problem that occurred over and over in the community was people's slow or broken internet connection at home or their WiFi setup.

I would say the technology for cloud gaming is here today, but the home internet connections of a lot of people aren't ready yet.

[–] sxan@midwest.social 13 points 1 year ago (2 children)

Many people don't understand the continued importance of a home wired LAN. WiFi is, and probably always will be, a fraction of the performance of an ethernet connection.

[–] squirrel@discuss.tchncs.de 3 points 1 year ago (1 children)

Yes, I should have mentioned that I've always played via wired ethernet connection.

[–] sxan@midwest.social 1 points 1 year ago

Of course! But anyone on WiFi is going to be subject to more lag, like you said.

[–] Blake@feddit.uk 2 points 1 year ago (2 children)

WiFi is, and probably always will be, a fraction of the performance of an ethernet connection

In terms of bandwidth, sure, but not in terms of latency. In fact, theoretically, WiFi could be faster than Ethernet: radio waves travel through air at very nearly the speed of light in a vacuum, while signals in copper and light pulses in glass fiber propagate at roughly two-thirds of that speed.

The limitation for WiFi is really at the physical layer - i.e. encoding/decoding. With that said, we do already have WiFi with transcoding fast enough to give sufficient performance for fast-paced gaming. While you’re totally correct that, at the moment, Ethernet is more capable in terms of bandwidth and latency, that’s not necessarily going to be true forever, and WiFi is good enough for any purpose at home use. The biggest issues are interference and attenuation - e.g. thick walls, sources of electromagnetic interference

[–] sxan@midwest.social 3 points 1 year ago (1 children)

Sure, good points. Even with in-home fiber (very unusual), latency of the medium is so equivalent as to be practically unmeasurable. I think, however, that the bigger factor is that it's cheaper and easier to get a fast ethernet switch than a fast WiFi router; most WiFi routers don't have particularly fast CPUs, or high-performance buses.

Honestly, though, I'm just guessing; I doubt any of this has as much of a latency impact as WAN factors. Bandwidth is where you'll notice WiFi effects, and this can present as latency issues as systems struggle to get updates over a (relatively) narrow pipe.

[–] Blake@feddit.uk 0 points 1 year ago* (last edited 1 year ago)

Thanks for the response, it's nice to chat with you :)

latency of the medium is so equivalent as to be practically unmeasureable

More or less, yup. There are some cool uses of RF to achieve very high bandwidth, low latency connections - 5G is a common example, and Wi-Fi 7 has a theoretical maximum speed of 46Gbps. While that's still far behind the maximum speed of Ethernet (400Gbps is in use, with 800Gbps in development), it's catching up very fast - and since most households and businesses with copper cabling will be using mostly Cat5e or Cat6a Ethernet (1Gbps and 10Gbps over 100m respectively), Wi-Fi will soon likely be faster than most copper Ethernet networks. It's also very likely that 5G internet will all but supplant ADSL and VDSL connections in the coming years. I think twisted-pair copper cabling is following in the footsteps of coax :)
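For a feel of those nominal rates (theoretical peaks from the figures above, not real-world throughput):

```python
# Time to move a 10 GB download at various nominal link rates.
rates_gbps = {
    "Cat5e (1GbE)": 1,
    "Cat6a (10GbE)": 10,
    "Wi-Fi 7 (peak)": 46,
    "400GbE": 400,
}
size_gbit = 10 * 8  # 10 gigabytes expressed in gigabits

for link, gbps in rates_gbps.items():
    print(f"{link}: {size_gbit / gbps:.1f} s")
```

Real throughput is always lower (protocol overhead, contention, interference), but the relative ordering holds.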

Even with in-home fiber

The minimum latency of a connection through fiber is about the same as (actually, slightly less than, but not enough to matter) the same connection made through copper. Signal propagation speed is not a benefit of fiber over copper - the benefits of fiber are that you can have many, many more connections in the same diameter of cable than with copper, it's immune to electromagnetic interference, and it can run much further distances without needing signal boosting.

most WiFi routers don’t have particularly fast CPUs, or high-performance buses.

That's one of the main issues, yeah - consumer grade electronics are usually total junk, especially the free routers provided by ISPs, but I'm also thinking of those absolutely horrible "gaming" Wi-Fi routers provided by the likes of ASUS - they have decent specs, but they're just absolutely overloaded with features that gobble RAM and CPU. Dear consumer electronics manufacturers, please just let the router be a router, and let the Wi-Fi APs be Wi-Fi APs. Combine the router and the Wi-Fi AP if you must, but absolutely please stop suggesting that people can run a hundred services from routers. You should totally upsell that feature in a separate node appliance or something! Sorry, I got distracted.

it’s cheaper and easier to get a fast ethernet switch than a fast WiFi router

I agree, but I also don't - most consumers don't really know what a switch is or why they might need one. Most switches found in houses are either integrated with a router, power line adapter, or Wi-Fi access point. While a good switch is absolutely going to be much cheaper than a good Wi-Fi AP, most people wouldn't really look to buy one. They might search for "Ethernet hub" on Amazon and luck into buying a decent switch, but I think most people think in terms of Wi-Fi these days, so it's probably easier to get a Wi-Fi AP than a switch.

Also, just a minor nitpick: "fast Ethernet" is a little confusing as terminology, because that's the marketing name for 100Mbps Ethernet connections (often indicated on network devices as FE) - so named because it was the successor to 10Mbps (regular) Ethernet. (Damn you, marketing people! I blame y'all for what you did to USB.) When we discuss this kind of thing, it's clearer to say 'high speed Ethernet' or refer specifically to line speed (e.g. 10GbE) - unless we're actually talking about 100Mbps Ethernet! Although even then it's probably a bit confusing these days - I'd usually call it 10/100 Ethernet rather than fast Ethernet, unless I was being really lazy ("yeah, just stick it in the f/e port")

I doubt any of this has as much of a latency impact as WAN factors

It definitely can do, but in a properly functioning network, I'd agree. If you have a faulty connection or significant source of interference or impedance, then that would be much more of an issue than anything else - otherwise, yeah, it's going to be the Internet where most of the latency comes in to play. I would estimate that probably 75% of people could get big improvements to their online experience by making changes to their home network, but at a certain point, yes, contention becomes the bottleneck, which is not so easily solved :)

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago (1 children)

Interference is a big issue for Wi-Fi as well.

You may be able to get the latency and the throughput, but if you're dropping packets because of some noise in the air, that's not good for gaming.

I also used stadia and have a different setup now... neither one worked very well over WiFi despite some pretty high end networking. I'd still get the occasional blip where everything would get super blurry because ... 🤷‍♂️

Part of that I think is the Wi-Fi chipset in my computer misbehaving, but I could never reproduce it in testing; just in practice I'd run into an issue for a few seconds every time ... which doesn't seem like much until you lose a game or you're about to beat some important challenge and then mAlFunCTion.

[–] Blake@feddit.uk 0 points 1 year ago (1 children)

Yep, I mean, the comment you’re replying to literally contains the phrase, “the biggest issues are interference…” haha

Likewise, it’s something that’s likely to improve as we tend to move away from the 2.4GHz band.

Dropping packets is definitely more of a problem for streaming in particular, rather than anything else, because like you said, if you drop packets you’re going to get degraded quality video. If you were gaming locally, it wouldn’t really affect you as much. Online games have extremely good, well designed methods of compensating for dropped packets in a way that streaming will never be able to match.

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago

Yep, I mean, the comment you’re replying to literally contains the phrase, “the biggest issues are interference…” haha

Oops, yup, read that one wrong.

Likewise, it’s something that’s likely to improve as we tend to move away from the 2.4GHz band.

I'm not so sure. We've been on 5GHz for a while ... even there, and as recently as WiFi 6 (I forget the exact band), there are still lots of problems.

Dropping packets is definitely more of a problem for streaming in particular, rather than anything else, because like you said, if you drop packets you’re going to get degraded quality video. If you were gaming locally, it wouldn’t really affect you as much. Online games have extremely good, well designed methods of compensating for dropped packets in a way that streaming will never be able to match.

Yes and no; dropping packets can still really badly impact competitive games. Casual games that use client-authoritative movement are much less affected though, for sure.

[–] NotAnArdvark@lemmy.ca 3 points 1 year ago (1 children)

I would say the technology for cloud gaming is here today, but the home internet connections of a lot of people aren't ready yet.

You witness this a lot with video conferencing. People tell one person their audio/video is shitty, and that person just shrugs and says "yeah, I have bad internet." In my head I'm screaming "Well, what have you tried?!" or "I see you sitting beside the refrigerator there!"

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago

Yeah... or microphones... I really wish they'd start putting the noise cancelling as an option on the receiving end... lots of people don't care to set up their audio right and then you get god awful static, crunching, or breathing in your ears.

It's especially prevalent in gaming where headset mics dominate. 🙃

[–] Blake@feddit.uk 1 points 1 year ago (1 children)

Those games are quite well matched with cloud streaming. An example of a game which isn’t suitable for cloud gaming would be a competitive FPS such as Rainbow Six Siege, where the additional delay imposed by the connection between the player and the game can be quite a significant disadvantage. The only way this would become acceptable would be if you live close enough to the host device that the latency is very low, or the host device is very close to the game server itself.

[–] kambusha@feddit.ch 2 points 1 year ago

I had Stadia too and played a lot of Destiny 2. I must say that I was highly impressed by the low latency. I literally couldn't notice that I wasn't playing locally, unless my internet went down.

Only when I took Stadia with me to a random airbnb did I start noticing any type of latency, and then we just played Mortal Kombat or other fighting games where you can just mash the buttons.

[–] CanadaPlus@lemmy.sdf.org 9 points 1 year ago* (last edited 1 year ago) (1 children)

The speed of light, so 50ms or so assuming locations on Earth. In practice a bit more because you have to go around it rather than through the core. Servers already have to make retroactive calls, which is why it looks like you hit but then you didn't sometimes.

Interestingly enough, Starlink has lower latency than wire despite the longer path because light travels slower than c through glass fiber.
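A back-of-envelope version of that comparison (the path lengths and velocity factor are illustrative assumptions, not actual route data):

```python
# Why a longer free-space path can beat fiber: radio/laser links in vacuum
# or air move at ~c, while light in glass fiber moves at ~0.67c.
C = 299_792  # speed of light in vacuum, km/s

def one_way_ms(km: float, velocity_factor: float) -> float:
    return 1000 * km / (C * velocity_factor)

fiber = one_way_ms(10_000, 0.67)  # 10,000 km terrestrial fiber route
# Satellite path: ~550 km up, inter-satellite laser links near c, ~550 km down
starlink = one_way_ms(550 + 10_000 + 550, 1.0)
print(f"fiber: {fiber:.0f} ms, satellite: {starlink:.0f} ms")
```

Even with ~1,100 km of extra distance, the vacuum path comes out ahead on this sketch.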

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago (2 children)

Where are you getting 50ms? The speed of light is a LOT faster than 50ms?

[–] rasensprenger@feddit.de 2 points 1 year ago (1 children)

No, it seems to be in the right order of magnitude

https://www.wolframalpha.com/input/?i=circumference+of+earth+%2F+speed+of+light

Obviously light doesn't have to travel quite as far, but 50ms is not a bad estimation for a worst case. Also you have to add processing delays at each router, which makes everything far slower.
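Quick sanity check in Python (using the vacuum speed of light; in fibre, light travels at roughly 2/3 c, so real figures are around 50% higher):

```python
# Back-of-the-envelope worst-case latency from the speed of light alone.
C_KM_PER_S = 299_792            # speed of light in vacuum, km/s
EARTH_CIRCUMFERENCE_KM = 40_075

# One-way trip halfway around the globe (signals route around, not through):
one_way_ms = (EARTH_CIRCUMFERENCE_KM / 2) / C_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way: {one_way_ms:.0f} ms, round trip: {round_trip_ms:.0f} ms")
# → one-way: 67 ms, round trip: 134 ms
```

So ~50ms one-way is in the right ballpark for antipodal worst cases, before any router or encoding delays.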

[–] Dark_Arc@social.packetloss.gg 2 points 1 year ago* (last edited 1 year ago) (1 children)

But that's not a realistic bound for cloud gaming. No cloud gaming design routes traffic all the way around the earth, or even halfway around it. Stadia used regional data centers, as do GeForce NOW and Shadow.

50ms seems really arbitrary.

[–] rasensprenger@feddit.de 2 points 1 year ago

I also think 50ms is a bit pessimistic, but there are locations far from Google's datacenters (at least until they finish their Johannesburg location; South Africa seems very isolated), and you're never connected via as-the-crow-flies fibre links, so the actual path length will be longer than just drawing a line on a map.

This can all be mitigated by just building more and closer edge servers, of course, but at some point you just have a computer in your room again.

[–] CanadaPlus@lemmy.sdf.org 0 points 1 year ago

Milliseconds. That's roughly the diameter of the Earth divided by the speed of light.

[–] Nighed@sffa.community 5 points 1 year ago* (last edited 1 year ago)

The base limit is the speed of light/electricity: it takes a fixed amount of time for a signal to travel a given distance. This is your base latency. For example, it takes about 70ms for light to travel halfway round the world (it has to go around, not through). This can be improved by talking to servers that are closer to you and by taking more direct links, but it can't be improved beyond the laws of physics.

On top of this you get really small amounts of processing delays as data is passed through various routers/computers on the way to the destination.

The real problem comes from congestion: if there is a lot of data being transferred between two destinations, the infrastructure between them might not be able to cope. This may result in messages being queued (causing a delay) or dropped (your controls don't make it to the server!). To avoid this, the network will route your message via somewhere else with less demand, increasing the distance and delay (but spreading the load).
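To see why queueing hurts so much, here's a toy illustration using the textbook M/M/1 queue formula. This is a deliberate simplification, not a model of any real router, but it shows how delay blows up as a link approaches saturation:

```python
# Toy M/M/1 queueing model: average delay grows sharply as a link nears
# full utilization. A textbook simplification, not a real router model.

def avg_queueing_delay_ms(service_time_ms: float, utilization: float) -> float:
    """Mean time a packet spends queued + being serviced (M/M/1 formula)."""
    assert 0 <= utilization < 1, "at 100% load the queue grows without bound"
    return service_time_ms / (1 - utilization)

for load in (0.5, 0.9, 0.99):
    print(f"{load:.0%} load -> {avg_queueing_delay_ms(1.0, load):.0f} ms")
# → 50% load -> 2 ms
# → 90% load -> 10 ms
# → 99% load -> 100 ms
```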

Unfortunately, if that overloaded cable is the one bringing data into your neighborhood, then there likely isn't an alternative route. In the UK at least, we are (finally) building out a fibre to the premises internet network that effectively fixes any local bottlenecks.

If you want to see where your latency is coming from, you can run a trace route using various applications (or even directly in Windows). This will show you the latency between each router that your data passes through on the way to its destination.

Edit addition: for game streaming the network delays are added onto the natural delays of running the game (controls -> computer -> processing -> display/speakers).

The other big additional delay for streaming is that in order to reduce the network load of streaming the game the image is compressed and encoded to be sent to you (much more than is done for your monitor cable).

This is a computationaly intensive operation that can take a good few ms. The better the computers at either end, the faster this can be done. However the big way forward here is hardware encoding/decoding. By using hardware that is made to just do encoding/decoding and nothing else this can be done much faster.

These encoders are commonly found on graphics cards and on the graphics parts of CPUs. As newer encoding formats are created and hardware encoders for them are built (and actually included), this area will become much faster.
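Putting the pieces above together, a streaming frame's end-to-end delay is roughly the sum of the stages. Every figure below is an illustrative ballpark, not a measurement of any particular service:

```python
# Illustrative end-to-end latency budget for game streaming.
# All figures are made-up ballparks, not measurements of any real service.
budget_ms = {
    "input (controller -> server)": 15,
    "game simulation + render":     16,   # one frame at ~60 fps
    "hardware encode":              5,
    "network (server -> client)":   15,
    "decode + display":             10,
}
total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:30s} {ms:3d} ms")
print(f"{'total':30s} {total:3d} ms")
```

The point is that no single stage dominates, which is why shaving a few ms off encoding (or moving the server closer) each makes a visible difference.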

Source: programmer with a computer science degree and a vague interest in networking.

On mobile, so sorry for bad editing.

[–] MrFunnyMoustache@lemmy.ml 4 points 1 year ago

The lag has several components. Input lag between the peripherals and your computer, the network transmissions to the server, the regular rendering of the game, live transcoding the game, the network again, decoding the stream on your device. The rest are pretty much insignificant.

The biggest way to reduce lag I can think of is if the server is literally in your city, and the connection between you and it has the fewest possible nodes. Some video streaming services partner with ISPs to put their servers in the same place, reducing overhead and improving the user experience. I'd assume that gaming would benefit from that too, but it's harder to implement.

Another way to improve networking lag is by prioritising game streaming data over other data, QoS (quality of service), is really important both for the home network and on the ISP side.

This should be obvious, but don't use a VPN.

For the video transcoding, it can be pretty quick, but having dedicated hardware like NVENC would be faster than using the CPU, not just in terms of FPS, but also in latency if given the same FPS (through FPS cap).

Higher FPS. The more frames per second, the lower the input lag, though it only matters if you eliminate network lag first.
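The frame-timing contribution alone is easy to estimate: an input arrives at a random point within a frame, so on average it waits about half a frame before the next frame can react to it (a simplified model that ignores pipelining and vsync):

```python
# Average latency contributed by frame timing alone. Simplified model:
# an input lands at a random point in a frame, so it waits ~half a frame
# on average before the next frame can incorporate it.

def avg_frame_latency_ms(fps: float) -> float:
    frame_time_ms = 1000.0 / fps
    return frame_time_ms / 2

for fps in (30, 60, 120):
    print(f"{fps:3d} fps -> ~{avg_frame_latency_ms(fps):.1f} ms average")
```

So going from 30 to 120 fps buys you roughly 12ms by itself, which only matters once network lag is under control.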

I should mention that I have never used any game streaming service, and I don't have the equipment to test lag either.

[–] Platform27@lemmy.ml 3 points 1 year ago* (last edited 1 year ago)

I think we are constantly progressing in that field. One issue for latency was that controllers used to contact your device, and then the server. Now they can connect directly to the server. Things will improve, like it or not.

For right now, I think the biggest hurdle is with ISPs.

  1. Data caps can be quite common, in many countries. Essentially creating a huge limit on how much you can (if at all) play.
  2. Most people’s router and access point hardware needs upgrading. A lot of the stock router AIOs from ISPs are really bad, creating a bottleneck before the data even reaches the servers.

Another hurdle I can see is companies profit sharing. Everyone wants a large cut, so I’d expect multiple streaming options… and many failures, like what we’re seeing on the movies/series streaming model… just with games it’ll be soooo much worse.