this post was submitted on 02 Oct 2023
29 points (96.8% liked)

Asklemmy

Let me clarify: We have a certain amount of latency when streaming games from both local and internet servers. In either case, how do we improve that latency, and what limits will we run into as the technology progresses?

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago (1 children)

That’s generally not how routing works

It is how it works ... mostly because what they're talking about is the fact that the Internet (at least in the US) is not really set up like a mesh at the ISP level. It's somewhere between "mesh" and "hub and spoke": lots of parties that could talk directly to each other don't, because nobody ever put down the lines and set up the routing equipment to connect two smaller ISPs or customers directly.

https://www.smithsonianmag.com/smart-news/first-detailed-public-map-us-internet-infrastructure-180956701/

[–] Blake@feddit.uk -1 points 1 year ago (1 children)

There’s absolutely nothing wrong with that topology - the fact that you seem to think that the design is a bad thing really demonstrates your lack of understanding here.

For example, have you never wondered why we don’t just connect every device in a network together like a big daisy chain? Or why we don’t use a mesh network? There are many reasons why we don’t really use those topologies anymore.

I don’t want to get into the specifics, but in general, the more networks a router is connected to, the less efficient it is overall.

The forwarding delay is pretty insignificant for most routers. Carrier-grade routers like those at the core of the internet can handle up to 43 billion packets per second; another hop is absolutely nothing in terms of delay.
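
To put rough numbers on that (a back-of-the-envelope sketch; the 43 billion figure is an aggregate forwarding rate, and the 50 µs per-hop delay is a deliberately pessimistic assumption):

```python
aggregate_pps = 43e9                  # quoted carrier-grade forwarding rate
per_packet_ps = 1 / aggregate_pps * 1e12
print(f"time per packet at full rate: {per_packet_ps:.1f} ps")  # ~23.3 ps

pessimistic_hop_ms = 50e-6 * 1e3      # assume a worst-ish case of 50 µs per hop
print(f"pessimistic extra hop: {pessimistic_hop_ms:.2f} ms")    # 0.05 ms
# Against ~20-30 ms of one-way cross-country fibre delay, an extra hop
# through a router like this is noise.
```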

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago* (last edited 1 year ago) (1 children)

For example, have you never wondered why we don’t just connect every device in a network all together like a big daisy chain? Or why we don’t use a mesh network? There is a large number of reasons why we don’t really use those topologies anymore.

Well, daisy-chaining would be outright insanity ... I'm not even sure why you'd jump to something that insane ... my internet connection doesn't need to depend on the guy down the street.

Making an optimally dense mesh network (and to be clear, I mean a partially connected mesh topology with more density than the current situation, which at a high level is already a partially connected mesh topology) would not be optimally cost-effective ... that's it.

the more networks a router is connected to, the less efficient it is overall. another hop is absolutely nothing in terms of delay.

Do you not see how these are contradictory statements?

Yeah, you'd need more routers and you'd have more lines. But you could route more directly between various points. E.g., there could be at least one major transmission line between each state and its adjacent states, to minimize the distance a packet has to physically travel and to increase redundancy. It's just more expensive, and there's typically not a need.
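
A toy sketch of what I mean, with made-up cities, made-up link latencies, and a hand-rolled shortest-path search:

```python
import heapq

def shortest_path_ms(graph, src, dst):
    """Dijkstra over a dict-of-dicts graph; edge weights are latencies in ms."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# Hypothetical topology: Denver traffic to Billings detours via Seattle.
links = {
    "denver":   {"seattle": 18.0},
    "seattle":  {"denver": 18.0, "billings": 12.0},
    "billings": {"seattle": 12.0},
}
print(shortest_path_ms(links, "denver", "billings"))  # 30.0 ms via Seattle

# Add a direct Denver<->Billings line (the "adjacent states" idea):
links["denver"]["billings"] = 8.0
links["billings"]["denver"] = 8.0
print(shortest_path_ms(links, "denver", "billings"))  # 8.0 ms direct
```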

This stuff happens in more population-dense areas because there's more data and more people, so direct connections make more sense. It's just money; it's not that having fewer lines through the Great Plains somehow makes the internet faster... Your argument and your attitude are something else. I suspect we're just talking past each other, but w/e.

[–] Blake@feddit.uk 1 points 1 year ago (1 children)

I’m becoming more and more convinced that you don’t really know what you’re talking about. Are you a professional network engineer or are you just a hobbyist?

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago (1 children)

I wear a lot of hats professionally; mostly programming. I don't do networking on a day-to-day basis though if that's what you're asking.

If you've got something actually substantive to back up your claim that (if money were no object) the current topology is totally optimal for traffic from an arbitrary point A <-> B on that map, though... have at it.

This all started with:

you’re often back-tracking across the continent before your traffic makes it to the end destination, with ISPs caring more about saving money than routing traffic quickly

And that's absolutely true ... depending on your location, you will travel an unnecessary distance to get to your destination ... because there just aren't wires connecting A <-> B. Just like a GPS will take you on a non-direct path to your destination because there's not a road directly to it.

A very simple example where the current topology results in routing all the way out to Seattle only to backtrack: https://geotraceroute.com/?node=0&host=umt.edu#

[–] Blake@feddit.uk 1 points 1 year ago (1 children)

The problem that I’m having (and why I asked that) is that a lot of my comments assumed knowledge which you don’t seem to have. I’m really not trying to be rude, but it makes it a lot more difficult to explain the flaws in your reasoning when you’re talking about topics beyond your knowledge as if you knew them well.

I have explained the realities of the situation to you; if you don’t want to accept them, that’s fine, but you’re basically arguing with an expert about something you don’t really understand very well. I’m happy to explain stuff, but you should just ask rather than assume you know better, because otherwise it’s much more difficult for me to understand the gaps in your understanding/knowledge.

So ultimately, for routers, we have a number of limited resources. Firstly, yes, interfaces, but also the usual stuff - CPU, RAM, etc.

Now, I mentioned before that routing protocols are very complex - they have many metrics which are taken into account to determine what path is ultimately best for each packet. This process can be quite intensive on CPU and RAM, because the router needs to “remember” all of the possible routes/destinations a packet can travel to, as well as all of the metrics for each destination - distance, delays, administrative distance, TTL, dropped packets, etc. - and then make a decision about processing it. And it needs to make these decisions billions of times a second. Slowing it down, even a tiny bit, can hugely impact the total throughput of the router.
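
Very loosely, that selection looks something like this (a simplified sketch - real routers do longest-prefix matching in dedicated hardware, and these field names are illustrative rather than any real protocol’s):

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix_len: int      # more specific prefixes win first
    admin_distance: int  # lower = more trusted source (e.g. static vs. BGP)
    metric: int          # protocol-specific cost; lower is better

# Candidate routes a router might hold for one destination.
candidates = [
    Route(prefix_len=16, admin_distance=20, metric=120),   # eBGP, /16
    Route(prefix_len=24, admin_distance=110, metric=30),   # OSPF, /24
    Route(prefix_len=24, admin_distance=20, metric=300),   # eBGP, /24
]

# Most specific prefix first, then lowest administrative distance, then metric.
best = min(candidates, key=lambda r: (-r.prefix_len, r.admin_distance, r.metric))
print(best)  # Route(prefix_len=24, admin_distance=20, metric=300)

# Every extra connection adds candidate routes like these for potentially
# millions of destinations - that is the memory/CPU overhead described above.
```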

When you add another connection to a router, you’re not just increasing the load for that one router, but for the routers which connect to the routers which connect to those routers which route to the routers that route to that router… you get the idea. It increases the number of options available, and so it places an additional burden on memory and processing. And when the ultimate difference in distance is even an extra 100 miles, that’s less than a millisecond of travelling time. It’s not worth the added complexity.
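
The millisecond figure is easy to check, assuming light in fibre travels at roughly two-thirds of c:

```python
C_KM_PER_S = 299_792           # speed of light in vacuum, km/s
FIBRE_VELOCITY_FACTOR = 2 / 3  # typical velocity factor of optical fibre

extra_km = 100 * 1.609         # an extra 100 miles of path
delay_ms = extra_km / (C_KM_PER_S * FIBRE_VELOCITY_FACTOR) * 1000
print(f"{delay_ms:.2f} ms one way")  # ~0.81 ms
```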

That’s what I meant when I said that an extra hop isn’t worth worrying about, but adding additional connections is inefficient.

[–] Dark_Arc@social.packetloss.gg 1 points 1 year ago* (last edited 1 year ago) (1 children)

but you’re basically arguing with an expert about something you don’t really understand very well. I’m happy to explain stuff but you should just ask rather than assume you know better because it makes it much more difficult for me to understand the gaps in your understanding/knowledge.

Okay, I'll apologize... For context, though: it's the internet, and it's hard to take "expert" at its word (and even outside of an online context, "expert" is a title I'm often skeptical of ... even when it's assigned to me :) ). I've argued with plenty of people (more so on Reddit) who turned out to be CS students... It's just the price of being on the internet, I guess, ha

I'm still not sure I agree with your conclusions, but that's mostly healthy skepticism ... because your argument isn't tracking with ... well ... physics or distributed computing. Adding more direct "routes" and taking load off "routes" that aren't optimal is typically a great way to speed up a system. It's definitely true that doing that adds overhead versus just having a few "better" systems do the work (at least from some perspectives), but it's hard for me to imagine that, with sufficient funds, it truly makes things worse to give routing algorithms more direct options and/or cut out unnecessary hops entirely.

Reducing "hops" and travel time is kind of the bread and butter of performance work when it comes to all kinds of optimizations in software engineering..

If you want me to ask a question ... what's your explanation for why there are so many more connections in the Northeast and on the West Coast, if more connections slow the whole system down? Why not just have a handful of routes?

[–] Blake@feddit.uk 1 points 1 year ago* (last edited 1 year ago)

You can’t really compare small-scale clusters of highly available services with the scale of the entire Internet; it’s just an entirely different ballgame. Though even in small-scale setups, there is always a sweet spot between too many paths and not enough paths - VRRP (the protocol usually used for high availability) actually has quite a big overhead, and you can’t have too many connections on the same network or it causes lots of problems.

Internet scale routing usually uses BGP, which also has quite a heavy overhead.

I guess all you need to understand is that routing isn’t free, and the more routes, the more overhead. So there’s always going to be a point where adding more routes just makes things slower rather than faster. And BGP… is just a bit of a mess, right now, honestly. The BGP table has grown so big that a lot of older devices can’t keep it in fast memory anymore, so they either have to be replaced with newer hardware or use slow memory (and therefore slow processing of packets). So it’s not really in everyone’s best interests to just keep adding more routes. It’s harder and harder to justify.
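
For a sense of scale (rough numbers only - the global IPv4 table is on the order of a million prefixes, and the bytes-per-entry figure is an assumption about per-route overhead):

```python
# Why old routers run out of fast memory: back-of-the-envelope table sizing.
ipv4_prefixes = 1_000_000   # rough order of the global IPv4 BGP table today
bytes_per_entry = 256       # assumed per-entry cost (next hop, attributes, ...)

table_mib = ipv4_prefixes * bytes_per_entry / 1024**2
print(f"~{table_mib:.0f} MiB of fast (TCAM/SRAM-class) memory for the table")
# Every new route announced anywhere adds an entry that every default-free
# router in the world has to store and search.
```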

why there are so many more connections in the north east and west coast if more connections slows the whole system down

I’m not from the US, so at best it would be an educated guess.

Firstly, it’s not as simple as “more connections is more slow”; it means there’s a greater overhead. If the improvement from adding another line is greater than the overhead, then it can be worthwhile. For example, imagine a simple network with three routers, A, B and C, where A is connected only to B, and C is connected only to B, meaning that B is connected to both A and C. If there is a large amount of traffic between A and C, it may be worth adding a direct connection between them. If there isn’t, then it’s probably not worth doing.
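
That trade-off is easy to sketch with made-up numbers:

```python
# Three routers: A - B - C. Is a direct A-C link worth adding?
latency_via_b_ms = 5.0 + 5.0  # A->B hop plus B->C hop
latency_direct_ms = 6.0       # proposed direct A->C line
a_c_traffic_share = 0.30      # fraction of all traffic that is A<->C

saving_ms = latency_via_b_ms - latency_direct_ms
print(f"saves {saving_ms:.1f} ms per packet on {a_c_traffic_share:.0%} of traffic")
# If that saving doesn't outweigh the cost of the line plus the extra routing
# state every neighbour now has to carry, the link isn't worth adding.
```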

I guess it’s a bit like adding a new road between two existing roads. Is it worth adding a junction and a set of traffic lights to some existing roads, or would that slow down traffic enough not to be worth doing?

Maybe, since you work with software more, it would make sense to put it this way: why don’t you create an index on every single column of every table in SQL?

Or just look at it like premature optimisation. There’s a saying about premature optimisation in software engineering! ;-)

Another thing to keep in mind though is that there’s definitely still quite a few bad decisions still kicking around from when the internet was new. It takes time and effort to get rid of the legacy junk, same as in programming.