this post was submitted on 07 Dec 2023
652 points (99.5% liked)

Technology


Speedy thing goes in, speedy thing comes out.

[–] Gork@lemm.ee 117 points 11 months ago (3 children)

Yay, I can't wait for Comcast to implement this so you can blow through your 1.2 TB data cap in a second and they can charge you $10 for every 50 GB you go over.

[–] Ab_intra@lemmy.world 58 points 11 months ago (7 children)

It still shocks me that they cap usage. There is no reason at all to do this. Why are they doing it?

[–] DoomBot5@lemmy.world 49 points 11 months ago (1 children)

Their network is underprovisioned. They sell 300 Mbps connections to all 8 tenants in an apartment building, but only have a 1 Gbps uplink. To make sure that link isn't always saturated, they impose a data cap so you won't want to use the bandwidth you're paying for. On top of that, everyone's connection is crippled during peak hours like the evening, when everyone is using it. As a bonus, they can sell you cable TV on top, so you don't hit your data cap watching shows.

[–] deadbeef@lemmy.nz 22 points 11 months ago (2 children)

I build ISP and private data networks for a living.

A contention ratio of 3 to 1 for residential circuits isn't bad at all. You'd have to get pretty unlucky, with your neighbors being raging pirates, to be able to tell it was contended at all. Any data cap should scare the worst of the pirates away, so you probably won't be in that situation.

If you can feel the circuit getting worse at different times of day, then the effective contention (taking into account links further upstream) is probably more like 30 to 1 than 3 to 1.
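The oversubscription arithmetic above can be sketched quickly. This reuses the 8-tenant / 1 Gbps example from earlier in the thread; the numbers are illustrative, not from any real ISP:

```python
# Hedged sketch: effective contention ratio on a shared uplink.

def effective_contention(subscribers: int, plan_mbps: float, uplink_mbps: float) -> float:
    """Ratio of total sold bandwidth to actual uplink capacity."""
    return subscribers * plan_mbps / uplink_mbps

# 8 tenants on 300 Mbps plans sharing a single 1 Gbps uplink.
ratio = effective_contention(8, 300, 1000)
print(f"contention ratio: {ratio:.1f} : 1")  # 2.4 : 1, close to the 3:1 figure
```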

[–] onion@feddit.de 15 points 11 months ago (4 children)

Wouldn't two Steam users downloading a game be enough to notice?

[–] c0mbatbag3l@lemmy.world 6 points 11 months ago

QoS is a thing, so it depends.

[–] Cort@lemmy.world 5 points 11 months ago (1 children)

Yeah, Steam is faster than most Linux torrents in my experience

[–] deadbeef@lemmy.nz 6 points 11 months ago

Steam can do pretty well filling a tail circuit, probably better on average. But a torrent of a large file with a ton of peers, when your client has the port forwarded back to it, absolutely puts more pressure on a tail circuit. More flows make the shaping work harder.

Sometimes we see an outlier in our reporting and it's not obvious if a customer has a torrent or a DDoS directed at them for the first few minutes.

[–] datelmd5sum@lemmy.world 10 points 11 months ago

I cringe every time I hear people choosing LTE/5G over DSL/fiber for a home connection. Here, ISPs can't legally offer a minimum bandwidth less than 70% of the nominal bandwidth described in the contract for fiber/copper.

But they can sell as many mobile subscriptions as they please and they sure like selling them.

[–] prole@sh.itjust.works 31 points 11 months ago* (last edited 11 months ago) (1 children)

Are you kidding? Lol. It's money. The answer is always money.

[–] DAMunzy@lemmy.dbzer0.com 5 points 11 months ago

Cue nip flaps and rubbing.

[–] Amends1782@lemmy.ca 15 points 11 months ago

The only reason it's ever been: money.

[–] Signtist@lemm.ee 6 points 11 months ago* (last edited 11 months ago)

Because businesses exist to make money, they have to balance charging customers as much as they can without losing them to a competitor. That used to mean treating customers with respect and making them want to stay. But now they've realized they can just pay lawmakers to let them have a monopoly, allowing them to charge as much as they want without worrying that customers will leave: there's either no competition to leave to, or the competition is using the same strategy, so leaving wouldn't fix anything anyway. Free market, baby!

[–] wmassingham@lemmy.world 5 points 11 months ago (1 children)

ISP shittiness aside, ISPs do actually pay for Internet backbone access by the byte. Usually there are peering agreements saying "you take 1 TB of traffic from us, and we'll take 1 TB of traffic from you", whether that traffic is destined for one of their customers (someone on Comcast scrolling Instagram), or they're just providing the link to the next major node (Comcast being the link between AT&T's segment of the US backbone and Big Mike's Internet out in podunk Nebraska).

And normally that works pretty well, until power users start moving huge amounts of data and unbalancing the traffic.

[–] Kazumara@feddit.de 5 points 11 months ago* (last edited 11 months ago)

That depends on where those bytes go, though. There is also the concept of "settlement-free peering" and content caches that are located in the ISP network.

For example we have a Google Global Cache instance in our network, so most Google traffic is served from there and we don't pay anyone per byte, we only pay for the power and space. Same for Akamai. Then for Microsoft, Cloudflare and Facebook we have peering links, where we can send and receive data related to their services freely, without balance requirements.

Of course this is only possible for larger networks (peering with everyone is not feasible) and we still pay for the other traffic, but it takes care of a lot of the volume.
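As a rough illustration of why caches and settlement-free peering matter, here is a toy calculation; the traffic shares below are invented for the example, not real figures from any network:

```python
# Toy sketch: fraction of traffic that still rides paid transit once
# content caches and settlement-free peering absorb the rest.

def paid_transit_share(traffic_shares: dict[str, float],
                       free_sources: set[str]) -> float:
    """Fraction of total traffic that is neither cached nor peered for free."""
    return sum(share for src, share in traffic_shares.items()
               if src not in free_sources)

# Made-up traffic mix for a mid-size ISP.
shares = {"google_cache": 0.30, "akamai_cache": 0.10,
          "settlement_free_peering": 0.25, "paid_transit": 0.35}
free = {"google_cache", "akamai_cache", "settlement_free_peering"}
print(paid_transit_share(shares, free))  # 0.35
```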

[–] IHeartBadCode@kbin.social 76 points 11 months ago (1 children)

For those wanting a bit of a summary.

transmitting up to 22.9 petabits per second (Pb/s) through a single optic cable composed of multiple fibers

The breakthrough isn’t things moving faster but more fibers per cable. So you can transfer more bits in parallel.

That’s still a good breakthrough because, for lots of reasons, packing more fibers in isn’t as straightforward as one would think.

[–] Kazumara@feddit.de 86 points 11 months ago* (last edited 11 months ago) (3 children)

The breakthrough isn’t things moving faster but more fibers per cable.

No, it's actually more cores per fiber, and using those very well for space-division multiplexing on top of the normal wavelength-division multiplexing. They are talking about 22.9 Pb/s per fiber, not per cable; the Tom's Hardware article is just wrong.

Cables can already contain hundreds of fibers, for example 576 here, or into the thousands if you use stacks of ribbon cables in the subunits, for example 3456 here.

[–] bassomitron@lemmy.world 67 points 11 months ago (1 children)

Here's a source that backs up what you're talking about and proves that the Tom's Hardware article is wrong: https://www.nict.go.jp/en/press/2023/11/30-1.html

[–] Kazumara@feddit.de 22 points 11 months ago (1 children)
[–] DaMonsterKnees@lemmy.world 22 points 11 months ago

Yes, thanks to all for contributing and assisting. I am grateful for the clarification and legwork. Folks say Reddit had this and Lemmy has less of it, so every time I see it, I make sure to appreciate it.

[–] neidu@feddit.nl 8 points 11 months ago (1 children)

Am I to understand that the cable used has multiple cores within a single cladding? Interesting approach.

Now we get to classify them as singlemode, multimode, and multiestmode.

[–] Fraylor@lemm.ee 76 points 11 months ago

This'll bring their fax machines up to the current century for sure.

[–] thiccckk@lemmy.world 34 points 11 months ago (4 children)

What's the use of high speed when videos are pixelated 😅😅😅😅😅

[–] Corkyskog@sh.itjust.works 33 points 11 months ago (2 children)

Wall Street just put in a bulk order.

[–] Kazumara@feddit.de 29 points 11 months ago (6 children)

The financial types are generally more interested in hollow-core fiber, to get their latencies even further down for high-frequency trading. Light travels at almost c in a hollow core, but only at about 2/3 c in a glass core.
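That latency gap can be estimated from the refractive indices. A quick sketch, using 1.47 for standard glass fiber (mentioned later in the thread) and ~1.0003 for an air-filled hollow core, which is an assumption for illustration:

```python
# Hedged sketch: one-way propagation delay over 1000 km of fiber,
# comparing hollow-core (essentially air) with standard glass core.

C = 299_792_458  # speed of light in vacuum, m/s

def one_way_delay_ms(distance_km: float, refractive_index: float) -> float:
    """Propagation delay in milliseconds: distance / (c / n)."""
    return distance_km * 1000 / (C / refractive_index) * 1000

glass = one_way_delay_ms(1000, 1.47)     # standard telco fiber
hollow = one_way_delay_ms(1000, 1.0003)  # hollow core, assumed index
print(f"glass:  {glass:.2f} ms")   # 4.90 ms
print(f"hollow: {hollow:.2f} ms")  # 3.34 ms
```

Over a long route that's more than a millisecond saved each way, which is an eternity in high-frequency trading.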

[–] PhlubbaDubba@lemm.ee 17 points 11 months ago

Actually Wall Street intentionally increases their latency

Some guy figured out that trades were getting sniped due to some locations having more latency than others relative to the trade location, so he developed a solution that intentionally lags the connection on different wires so that everyone gets their trade updates simultaneously and can't snipe each other to up the prices on other people's buys.

[–] ConstipatedWatson@lemmy.world 20 points 11 months ago (4 children)

Is this a Portal reference? I remember hearing it from GLaDOS!

[–] kewwwi@lemmy.world 18 points 11 months ago

that's a lot of floppies

[–] r00ty@kbin.life 13 points 11 months ago (7 children)
[–] umulu@lemmy.world 6 points 11 months ago

You might have it in 20 years. The question is... when will we get servers that support that speed? XD

[–] FlyingSquid@lemmy.world 12 points 11 months ago (1 children)

This is just what I need for my goal of backing up both the Internet Archive and Wikipedia on local storage every day.

[–] bassomitron@lemmy.world 8 points 11 months ago

If you really are, then you should be doing daily incrementals and full backups every couple of weeks. I can't imagine the incrementals for those are more than a few dozen GB, but then I'm not familiar with the size of the Internet Archive.

[–] Grass@sh.itjust.works 10 points 11 months ago (2 children)

Isn't optical just as much about the end points as the cables?

[–] Kazumara@feddit.de 6 points 11 months ago (1 children)

Yes, to get their speeds they used the usual wavelength-division multiplexing, except over an insane 750 wavelength channels, plus space-division multiplexing over the 38 cores with 3 modes each, and 256-QAM with dual polarization on every channel.
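A back-of-envelope check of that parallelism; the symbol rate isn't stated here, so the per-channel figure is worked backwards from the headline 22.9 Pb/s and should be read as an inference, not a reported spec:

```python
# Hedged sketch: how many parallel channels the SDM x WDM scheme yields,
# and what per-channel rate the headline figure implies.

wavelengths = 750  # WDM channels
cores = 38         # spatial cores in one fiber
modes = 3          # modes per core
total_pbps = 22.9  # headline capacity, petabits per second

lanes = wavelengths * cores * modes
per_lane_gbps = total_pbps * 1e15 / lanes / 1e9
print(f"{lanes} parallel channels")              # 85500
print(f"~{per_lane_gbps:.0f} Gb/s per channel")  # ~268 Gb/s
```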

[–] Lemminary@lemmy.world 6 points 11 months ago (1 children)

Yeah... mhmm... right. I know some of these words!

[–] NeoNachtwaechter@lemmy.world 6 points 11 months ago (2 children)

optical fiber speed record

Isn't that simply the speed of light, always? ;-)

[–] ThankYouVeryMuch@kbin.social 12 points 11 months ago (1 children)

Nope, if we are talking about the actual speed of the signal, optical fiber is relatively slow at ~1/3 c, compared to air or copper where it's almost c. They're using 'speed' to mean bandwidth. A van full of SD cards would have massive bandwidth, but a very slow actual speed.

[–] Kazumara@feddit.de 12 points 11 months ago* (last edited 11 months ago) (1 children)

Actually, it's about 2/3 c; the refractive index of normal telco fibers (G.652 and G.655) is around 1.47.

[–] DaMonsterKnees@lemmy.world 3 points 11 months ago

Cool information, thanks!

[–] snooggums@kbin.social 5 points 11 months ago (1 children)

It is pretty confusing that we refer to the volume of data as speed in networks.

[–] the_tab_key@lemmy.world 16 points 11 months ago

We don't. The measure is bits/s, which is a speed because it's measured relative to time. 1 TB is a volume/amount; 1 TB/s is a speed.
