[–] slacktoid@lemmy.ml 29 points 1 week ago (3 children)

I understand the memory constraints, but it does feel weird coming from Framework, is all I have to say. That also seems to be the general trajectory of computing, though. I really want LPCAMM to catch on!

[–] Scholars_Mate@lemmy.world 34 points 1 week ago (1 children)

Apparently Framework did try to get AMD to use LPCAMM, but it just didn't work from a signal integrity standpoint at the kind of speeds they need to run the memory at.

[–] grue@lemmy.world 19 points 1 week ago (2 children)

Sounds like it doesn't bode well for the future of DIMMs at all, TBH.

[–] SpaceNoodle@lemmy.world 33 points 1 week ago

You have a DIMM view of the future.

[–] avidamoeba@lemmy.ca 9 points 1 week ago* (last edited 1 week ago)

My AM5 system doesn't POST with 128 GB of DDR5-5600 at anything higher than 4400 MT/s, even at JEDEC timings and voltage. 2 DIMMs are fine. 4 DIMMs... rip. So I'd say the present of DIMMs is already a bit shaky. DIMMs are still great for lots of cheap RAM, though; I paid a lot less than I would for the equivalent amount of memory in a Framework Desktop.

[–] brucethemoose@lemmy.world 23 points 1 week ago* (last edited 1 week ago) (2 children)

Eventually most system RAM will have to be on-package anyway. Physics dictates that you pay a penalty going over pins and motherboard traces, and it gets more severe with every speed bump.

It's possible that external RAM will eventually evolve into a "2nd tier" of system memory, for background processes, spillover, inactive programs/data, things like that.
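Linux is already heading that way: slower CXL / "special purpose" memory can show up as a CPU-less NUMA node, and either the kernel's tiering/demotion logic or the application itself can steer cold data onto it. A minimal sketch of the explicit route, assuming the slow tier is exposed as NUMA node 1 (the node number and buffer size are made up for illustration):

```c
/* Sketch only: pin a "cold" buffer to a hypothetical slow-memory NUMA node.
 * Assumes the external/slow tier is node 1. Build with: gcc tier.c -lnuma */
#include <numaif.h>    /* mbind, MPOL_BIND (libnuma) */
#include <sys/mman.h>  /* mmap */
#include <stdio.h>

int main(void) {
    size_t len = 64UL << 20;                        /* 64 MiB of rarely-touched data */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned long nodemask = 1UL << 1;              /* bit 1 = NUMA node 1 (assumed slow tier) */
    if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0) != 0)
        perror("mbind");                            /* pages will now fault in on the slow node */

    /* To the program it's just RAM, only further away. */
    return 0;
}
```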

[–] slacktoid@lemmy.ml 1 points 1 day ago (1 children)

That would be fine, as long as the system can use it as actual RAM and not just a staging ground.

[–] brucethemoose@lemmy.world 1 points 1 day ago

Keep in mind that it would be pretty slow, as it doesn’t make sense to burn power and die area on a wide secondary bus.
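Back-of-the-envelope (with made-up bus widths and transfer rates, not any specific product): peak bandwidth is roughly bus width × transfer rate, so a narrow external channel gives up a lot against a wide on-package bus.

```c
/* Illustrative bandwidth math only -- the widths and rates below are assumptions. */
#include <stdio.h>

/* Peak theoretical bandwidth in GB/s: (bus width in bits / 8) * MT/s / 1000 */
static double peak_gb_per_s(int bus_width_bits, int mega_transfers_per_s) {
    return (bus_width_bits / 8.0) * mega_transfers_per_s / 1000.0;
}

int main(void) {
    printf("wide on-package bus, 256-bit @ 8000 MT/s: ~%.0f GB/s\n", peak_gb_per_s(256, 8000)); /* ~256 */
    printf("narrow external bus,  64-bit @ 5600 MT/s: ~%.0f GB/s\n", peak_gb_per_s(64, 5600));  /* ~45  */
    return 0;
}
```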

[–] leisesprecher@feddit.org 15 points 1 week ago (1 children)

It's already the fourth tier, after the L1, L2, and L3 caches.

Maybe something like Optane will make a comeback. Having 16 GB of soldered RAM and 500 GB of relatively slow but inexpensive Optane RAM would be great.
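That split is roughly what Intel shipped with Optane DC Persistent Memory: the slow tier could either be managed transparently as extra "RAM" (Memory Mode) or mapped directly by software. A rough sketch of the direct route on Linux, with a hypothetical /dev/dax0.0 character device standing in for the cheap slow tier (device path and size are assumptions):

```c
/* Sketch: map a (hypothetical) DAX device and use it as plain byte-addressable
 * memory -- no filesystem, no page cache in between. */
#include <fcntl.h>     /* open, O_RDWR */
#include <sys/mman.h>  /* mmap */
#include <stdio.h>
#include <string.h>

int main(void) {
    int fd = open("/dev/dax0.0", O_RDWR);           /* assumed device name */
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1UL << 30;                         /* map 1 GiB of it */
    char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    memset(mem, 0, len);                            /* loads/stores go straight to the medium */
    return 0;
}
```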

[–] brucethemoose@lemmy.world 6 points 1 week ago

DRAM is so cheap and ubiquitous that they will probably keep using that, barring any massive breakthroughs. The "persistence after power-off" is nice to have, but not strictly needed.