this post was submitted on 21 Jan 2025
246 points (98.4% liked)

[–] ByteOnBikes@slrpnk.net 5 points 2 days ago (3 children)

Ignoring the Seagate part, which makes sense... is there an issue with 36TB?

I recall IT people losing their minds when we hit 1TB, back when the average hard drive was like 80GB.

So this growth seems right.

[–] schizo@forum.uncomfortable.business 6 points 2 days ago (1 children)

It's RAID rebuild times.

The bigger the drive, the longer the time.

The longer the time, the more likely the rebuild will fail.

That said, modern RAID is much more robust against this kind of fault, but still: if you have one parity drive, one dead drive, and a rebuild in progress, losing another drive means you're fucked.
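Quick, very hand-wavy illustration of the "longer rebuild = bigger window for a second failure" point. The annual failure rate and array size below are made-up numbers, and the model ignores correlated failures:

```python
import math

def p_second_failure(surviving_drives: int, rebuild_hours: float,
                     annual_failure_rate: float = 0.015) -> float:
    """Chance that at least one surviving drive dies during the rebuild
    window, assuming independent exponential failures (a simplification;
    it also ignores unrecoverable read errors, which scale with the
    amount of data read)."""
    rate_per_hour = annual_failure_rate / (365 * 24)  # AFR -> per-drive hourly rate
    return 1 - math.exp(-surviving_drives * rate_per_hour * rebuild_hours)

# 8-bay array with one dead drive, ~100 h rebuild of a 36TB disk:
print(f"{p_second_failure(7, 100):.3%}")  # ~0.120%
```

The absolute number isn't the point; the point is that the risk scales with rebuild time, which scales with drive size.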

[–] notfromhere@lemmy.ml 1 points 2 days ago (1 children)

Just rebuilt onto Ceph and it's a game changer. Drive fails? Who cares; replace it with a bigger drive and go about your day. If the total drive count is large enough (and depending on whether you're using EC or replication), recovery can mean pulling data from tons of drives instead of a handful.
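For anyone curious about the EC vs replication point: the trade-off is roughly raw-space overhead vs how many chunks a recovery has to touch. A minimal sketch (the 4+2 and 3x figures are just common example settings, not a recommendation):

```python
def raw_to_usable_ratio(scheme: str, k: int = 4, m: int = 2, replicas: int = 3) -> float:
    """How much raw disk a Ceph-style pool burns per unit of usable data.

    Replication keeps `replicas` full copies; erasure coding splits each
    object into k data chunks plus m parity chunks and survives losing
    any m of them.
    """
    if scheme == "replication":
        return float(replicas)       # e.g. 3x raw for 1x usable
    if scheme == "ec":
        return (k + m) / k           # e.g. 4+2 -> 1.5x raw for 1x usable
    raise ValueError(f"unknown scheme: {scheme}")

print(raw_to_usable_ratio("replication"))  # 3.0
print(raw_to_usable_ratio("ec"))           # 1.5
```

Either way, the lost chunks or copies live all over the cluster, which is why recovery fans out across many drives.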

[–] GamingChairModel@lemmy.world 2 points 2 days ago (1 children)

It's still the same issue, RAID or Ceph. If a physical drive can only write 100 MB/s, a 36TB drive will take 360,000 seconds (6000 minutes or 100 hours) to write. During the 100-hour window, you'll be down a drive, and be vulnerable to a second failure. Both RAID and Ceph can be configured for more redundancy at the cost of less storage capacity, but even Ceph fails (down to read only mode, or data loss) if too many physical drives fail.
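The same back-of-the-envelope math in code form (the 100 MB/s sustained write speed is the assumption doing all the work here; real rebuilds are often slower under load):

```python
drive_tb = 36
write_mb_per_s = 100                              # assumed sustained write speed

seconds = drive_tb * 1_000_000 / write_mb_per_s   # 36 TB = 36,000,000 MB (decimal units)
print(seconds)            # 360000.0 seconds
print(seconds / 60)       # 6000.0 minutes
print(seconds / 3600)     # 100.0 hours of exposure window
```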

[–] notfromhere@lemmy.ml 1 points 2 days ago

While true, Ceph can backfill the replacement drive with data spread across far more drives than RAID can, so the point I was trying to make is that the risk of a second failure during resilvering can be greatly mitigated by a Ceph setup.
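A crude model of why that fan-out helps. The throughput number is assumed, and it ignores network limits and Ceph's recovery throttling, which usually matter more in practice:

```python
def recovery_hours(lost_data_tb: float, drives_sharing_the_work: int,
                   per_drive_mb_per_s: float = 100) -> float:
    """Hours to re-create the lost data when reads and writes are spread
    evenly across `drives_sharing_the_work` drives."""
    total_mb = lost_data_tb * 1_000_000
    return total_mb / (per_drive_mb_per_s * drives_sharing_the_work) / 3600

print(recovery_hours(36, 1))    # ~100 h: one replacement drive is the bottleneck (RAID-style)
print(recovery_hours(36, 20))   # ~5 h when 20 drives each take a slice (Ceph-style)
```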

[–] cupcakezealot@lemmy.blahaj.zone 5 points 2 days ago (1 children)

I recall IT people losing their minds when we hit the 1TB

1TB? I remember when my first computer had a state-of-the-art 200MB hard drive.

[–] somenonewho@feddit.org 1 points 1 day ago (1 children)

I remember first hearing about 1TB and thinking, "Who needs that much storage?" I wasn't an IT person then, just a regular nerd, but I am now, and it took me a while to ever fill up my first 1TB HDD (Steam folder). Now I have a 2TB NVMe in my desktop and a 4TB NVMe in my server (for my Linux ISOs ;)).

I remember when Zip drives sounded so big!

[–] thirteene@lemmy.world 2 points 2 days ago (1 children)

It's so consistent it has a name. Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years: https://en.m.wikipedia.org/wiki/Moore%27s_law

I heard we were at the theoretical limit, but apparently there's been a breakthrough: https://phys.org/news/2020-09-bits-atom.html
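For a sense of scale, here's what "doubles about every two years" looks like if you naively project it forward from a 36TB drive (purely illustrative, and as the reply below points out, HDD capacity isn't actually driven by Moore's law):

```python
def projected_tb(start_tb: float, years: float, doubling_period_years: float = 2.0) -> float:
    """Naive Moore's-law-style projection: capacity doubles every
    `doubling_period_years` years."""
    return start_tb * 2 ** (years / doubling_period_years)

for years in (2, 4, 10):
    print(years, projected_tb(36, years))   # 72.0, 144.0, 1152.0 TB
```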

[–] Keelhaul@sh.itjust.works 9 points 2 days ago

Quick note: HDD storage doesn't use transistors to store data, so it isn't really directly related to Moore's law. SSDs do use transistors/nanostructures (NAND) for storage, and their capacity is more closely tied to Moore's law.