this post was submitted on 21 Jan 2024
249 points (97.7% liked)

Technology

Computer RAM gets biggest upgrade in 25 years but it may be too little, too late — LPCAMM2 won't stop Apple, Intel and AMD from integrating memory directly on the CPU

LPCAMM2 is a revolution in RAM, but it faces an uphill struggle

top 50 comments
[–] 7heo@lemmy.ml 82 points 9 months ago* (last edited 9 months ago) (3 children)

Also, lots of users aren't gonna want the main system memory on the CPU die. Aside from the fact that it creates a clear path for vendors to artificially inflate prices through manufactured scarcity, product segmentation, and bundling, it also prevents end users from upgrading their machines.

I'm pretty sure this even goes against the stated goals of the EU in terms of reduction of e-waste.

I have no doubt that a handful of vendors cooperating could restrict their offerings and force end users' hands, but I don't think this would be here to stay. Unless it provides such a drastic performance boost (like 2x or more) that it becomes enough of an incentive to convince the masses.

[–] eek2121@lemmy.world 74 points 9 months ago (6 children)

Outside of DIY, end users don’t care. See: Apple.

Also, if you have a laptop with LPDDR5, it is soldered. If it has DDR5 or some variant of DDR4, it is likely also soldered as most OEMs did away with DIMM slots.

I don’t like or agree with the practice.

[–] logicbomb@lemmy.world 33 points 9 months ago (1 children)

Even people who build their own computers usually buy all the RAM they want at the time that they're building it.

The biggest difference to them is likely the feeling that they're losing their ability to upgrade, more than the actual upgrade itself. I still think that feeling is an important factor, though.

[–] menemen@lemmy.world 3 points 9 months ago (1 children)

Biggest difference is that defective RAM can cost you a lot more imo.

[–] mipadaitu@lemmy.world 15 points 9 months ago (1 children)

Frame.work laptops have non-soldered, upgradable DDR5 memory. In fact, you can buy a laptop with no memory, buy the RAM somewhere else, and install it yourself.

[–] eek2121@lemmy.world 15 points 9 months ago

Yeah, but it is regular DDR5, which is less power efficient.

I do love Framework, however. They are at the top of my list when I eventually upgrade my laptop.

Hopefully they give us CAMM2 modules with LPDDR5 at that point.

[–] menemen@lemmy.world 3 points 9 months ago* (last edited 9 months ago) (1 children)

I always think of my old Asus Eee PC netbook from 2010, which had a special compartment accessible from the outside, without opening up the notebook itself, just so that users would be able to upgrade their RAM. How times have changed, from "help the user get what he needs" to "help the user get what we need". Personally I blame Apple for this tbh.

This is how this looked: http://images.bit-tech.net/content_images/2007/12/add_more_storage_space_to_your_asus_eee_pc/panel.jpg

And the best part: my son is using this netbook now with a lightweight Linux. I actually swapped the RAM 2 months ago. It even plays Minecraft, and he draws on it with my drawing tablet.

[–] eek2121@lemmy.world 2 points 9 months ago

In the case of LPDDR5, we don’t have removable memory due to tight signaling requirements and the fact that the DIMM slots themselves take up too much space when populated.

LPCAMM2 solves this, so I hope it is widely adopted going forward, because LPDDR5 offers a huge upgrade over the previous generation.

[–] NightAuthor@lemmy.world 2 points 9 months ago (1 children)

But even soldered ram isn’t as bad as in-cpu ram. Soldered ram can be replaced/upgraded by skilled technicians. I don’t think that’s possible at all with in-cpu ram.

[–] the_post_of_tom_joad@sh.itjust.works 11 points 9 months ago* (last edited 9 months ago) (2 children)

Soldered ram can be replaced/upgraded by skilled technicians.

Ok, I know it isn't the point of your comment and I agree with the whole premise, but who, I say who, is soldering their own RAM? I admit that it should be possible, but the upgradeability limitations, not to mention the skill you'd need... I'd say it puts soldered RAM in the same echelon as "not upgradeable".

Can anyone speak to this? Am I wrong about the difficulty and hardware limits?

[–] Corgana@startrek.website 5 points 9 months ago* (last edited 9 months ago)

Exactly. Few people are willing to deal with the adhesive used in Macs and smartphones. Even fewer will deal with solder.

[–] NightAuthor@lemmy.world 3 points 9 months ago (1 children)

Yeah, I agree.

As for who those few are, well, I wouldn't myself… probably… but I'd definitely like the option of taking my laptop to someone like Louis Rossmann who can do such work. He's even shown that sometimes the RAM gets destroyed by Apple's weird circuit designs, and if it were just soldered on, the laptop and all your data would actually be salvageable.

[–] QuarterSwede@lemmy.world 15 points 9 months ago (2 children)

On-CPU RAM does provide much better performance. That's the reason they're going that route.

[–] SnotFlickerman@lemmy.blahaj.zone 10 points 9 months ago (1 children)

It's part of the reason why RAM was always placed close to the CPU on the motherboard anyway. The farther they are apart, the more time and energy is used to transfer data and instructions between them.

[–] QuarterSwede@lemmy.world 9 points 9 months ago (2 children)

Right, it's a physics issue, not greed. I mean, they're going to make a margin off of it for sure, but that's not the sole reason to do this.

[–] Plopp@lemmy.world 4 points 9 months ago

Greed might not be the main driving force, but it's absolutely there too. I predict on-CPU RAM will cost more than it should in the future due to lack of competition. (Yes, I know there aren't that many manufacturers of the actual chips even today, when consumers can choose from many brands of RAM sticks.)

[–] SnotFlickerman@lemmy.blahaj.zone 3 points 9 months ago* (last edited 9 months ago)

I'm imagining a world with desktops and laptops that have On-CPU-RAM and On-Motherboard-RAM with the traditionally slotted RAM acting as a swap for the On-CPU-RAM.

I mean, isn't that in principle how swap traditionally works? It takes up some space on your slower disk drive to "swap" data out of RAM when you run out of RAM. On-Motherboard-RAM, since it's slower than On-CPU-RAM, could serve the same purpose, meaning limited On-CPU-RAM wouldn't be as much of a constraint.
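For illustration only, here is a toy sketch of that tiering idea in Python: a small "fast tier" backed by a larger "slow tier", with the least-recently-used pages demoted under pressure. The class name and page counts are made up for the example; no real memory controller works at this level.

```python
from collections import OrderedDict

class TwoTierMemory:
    """Toy model: a small fast tier (think on-package RAM) backed by a
    larger slow tier (think socketed DIMMs). On a fast-tier miss, the
    least-recently-used page is demoted to the slow tier -- the same
    idea as swap, only to slower RAM instead of disk."""

    def __init__(self, fast_pages=4):
        self.fast = OrderedDict()   # page -> data, kept in LRU order
        self.slow = {}              # demoted pages
        self.fast_pages = fast_pages

    def access(self, page):
        if page in self.fast:
            self.fast.move_to_end(page)             # hit: refresh LRU position
            return "fast hit"
        data = self.slow.pop(page, f"data-{page}")  # promote (or first touch)
        self.fast[page] = data
        if len(self.fast) > self.fast_pages:        # over capacity: demote LRU page
            victim, vdata = self.fast.popitem(last=False)
            self.slow[victim] = vdata
            return f"promoted {page}, demoted {victim}"
        return f"promoted {page}"

mem = TwoTierMemory(fast_pages=2)
for p in [1, 2, 1, 3, 4, 1]:
    print(p, "->", mem.access(p))
```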

[–] Treczoks@lemmy.world 3 points 9 months ago

Which makes a lot of sense as RAM speed is the one big bottleneck.

[–] lolcatnip@reddthat.com 10 points 9 months ago (1 children)

People who really care about computers buy handmade artisanal transistors.

[–] SnotFlickerman@lemmy.blahaj.zone 72 points 9 months ago* (last edited 9 months ago) (1 children)

On-CPU RAM is definitely superior for performance, and what I'm not seeing people consider here is a future where you have On-CPU-RAM and On-Motherboard-RAM. CPU RAM for intense CPU functions, and traditionally seated RAM to be more like a modern "swap", I suppose, but instead of using the slower disks for swap, you're just using slower RAM.

I could especially see this in Enterprise level hardware. I'm just saying, don't throw the baby out with the bathwater. Por Que No Los Dos?

I know, I know, you can't expect corporations to do squat to benefit the consumer, but one can hope.

[–] 4am@lemm.ee 31 points 9 months ago (2 children)

Yeah, there is no way they’re gonna put 1TB of RAM on a CPU die anytime soon.

Does that mean that consumer hardware will include expandable RAM though? I feel like for the average person, that option still has a very high chance of disappearing on a lot of machines.

[–] SnotFlickerman@lemmy.blahaj.zone 15 points 9 months ago* (last edited 9 months ago) (1 children)

Oh yeah, a very high chance of disappearing. The unfortunate reality is probably 80% of people never upgrade their laptops or desktops. Building and maintaining your own PC has become more en vogue in recent years, but the vast majority of average consumers just don't take part in the practice. Thus, it will not be prioritized by the industry. Why spend money on making your machines upgrade-able if the majority of users don't ever take advantage of the feature?

I don't like why it will happen, but I understand the economics of it.

[–] Telodzrum@lemmy.world 7 points 9 months ago

Bro, it’s way higher than 80%.

[–] monkeyman512@lemmy.world 3 points 9 months ago

I think most people don't know the difference between "on-die" and "on-package". This may be what they mean: https://beebom.com/intel-meteor-lake-cpu-on-chip-ram/

[–] originalucifer@moist.catsweat.com 16 points 9 months ago (2 children)

Both techniques will obviously need to coexist for some time. They don't have the logistics for putting large amounts of memory near the processor quite yet, so there is still a place for regular RAM.

[–] You999@sh.itjust.works 4 points 9 months ago (2 children)

I'd argue that they do have the logistics down pretty well at this point as HBM3E can squeeze 144Gb onto a package.

[–] Mango@lemmy.world 14 points 9 months ago (7 children)

Wouldn't RAM on die mean lower wafer yield?

[–] WaterWaiver@aussie.zone 14 points 9 months ago

We already have memory dies bonded to our CPU dies in the form of L3 cache. It's lower latency, higher throughput, up to a few hundred MiB in bigger models, and can potentially be used without external RAM sticks (but I've not heard of that feature being used outside of early boot in BIOS firmware -- that's probably the only place we'll see it). Sometimes it's DRAM, sometimes it's SRAM, and its size varies quite a bit.
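As an aside, if you're curious how big those caches are on your own machine, Linux exposes the hierarchy through sysfs. A quick sketch (Linux-only; the sysfs layout is standard, but the output varies by CPU):

```python
import glob
import os

def read(path, name):
    """Read one sysfs attribute and strip the trailing newline."""
    with open(os.path.join(path, name)) as f:
        return f.read().strip()

# Each index*/ directory describes one cache (L1d, L1i, L2, L3) for CPU 0.
for idx in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cache/index*")):
    print(f"L{read(idx, 'level')} {read(idx, 'type'):<12} {read(idx, 'size')}")
```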

[–] Brokkr@lemmy.world 5 points 9 months ago* (last edited 9 months ago) (1 children)

I get that this was primarily created to benefit laptops, but would it provide any advantage for desktops?

[–] Telodzrum@lemmy.world 3 points 9 months ago (1 children)

Yes, the laws of physics dictate that on-die RAM will be markedly faster.

[–] Brokkr@lemmy.world 2 points 9 months ago

I get that, but is this on-die? It says that it is modular, so I interpreted that to mean that it was not on-die.

[–] bruhduh@lemmy.world 4 points 9 months ago

3D V-Cache would like to say hi.

[–] dukatos@lemm.ee 3 points 9 months ago

This is the future of PC, but with soldered RAM: https://en.m.wikipedia.org/wiki/Slot_1

[–] SanndyTheManndy@lemmy.world 3 points 9 months ago (6 children)

I already use a processor with integrated graphics

[–] captain_aggravated@sh.itjust.works 3 points 9 months ago (3 children)

Question: modern systems can take hundreds of GB or even terabytes of RAM, right? At that point, why not use non-volatile storage as RAM? Performance should increase since data wouldn't have to be loaded.

[–] Mortoc@lemmy.world 24 points 9 months ago* (last edited 9 months ago) (6 children)

What you’re describing is the holy grail of computer memory technology. If we had nonvolatile memory as fast as RAM, we would absolutely be using it instead. Unfortunately even the fastest SSD today would be a significant drop in speed from modern RAM.

[–] carpelbridgesyndrome@sh.itjust.works 6 points 9 months ago* (last edited 9 months ago)

From the perspective of a computer engineer, SSDs are painfully slow. Waiting for data on disk is slow enough that it is typically handled by asking the OS for the data and having the OS schedule another process onto the CPU while it waits. RAM is also slow, although not nearly as slow. Ideally you want your data in the L1 cache, which is fast enough to barely stall the CPU. The L2 and L3 caches are slower but larger and more likely to have the data you want. If the caches are empty and you have to read RAM, your CPU will either do a lot of speculative execution or, more likely, stall.

Speculative execution on CPUs is a desperate attempt to deal with the fact that all memory access is slow, by just continuing through the code as if you already know what is in memory. If the speculation is wrong, a lot of work gets thrown out (hopefully nothing unsound happens) and the delay is more noticeable.

Bluntly, an SSD-only system would probably be an order of magnitude slower. I'm also not sure switching to a new process (or even thread) to load from SSD would be viable without RAM, as it would likely invalidate a lot of cache, triggering more loads.
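To put a rough number on that gap, here is a small sketch that streams the same 256 MiB once from RAM and once from a file after asking the kernel to drop it from the page cache. It is Linux-only (posix_fadvise), the figures are dominated by Python overhead and whatever drive you run it on, and the temp-file location is just an example, so treat the result as indicative only:

```python
import os
import tempfile
import time

SIZE = 256 * 1024 * 1024   # 256 MiB of test data
CHUNK = 1024 * 1024        # stream it in 1 MiB chunks

data = os.urandom(SIZE)

# Write the data to a temp file in the current directory (note: /tmp is
# often RAM-backed tmpfs, which would defeat the comparison), then ask the
# kernel to evict it from the page cache so reads actually hit the drive.
fd, path = tempfile.mkstemp(dir=".")
for off in range(0, SIZE, CHUNK):
    os.write(fd, data[off:off + CHUNK])
os.fsync(fd)
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)

# Pass 1: copy the data chunk by chunk out of RAM.
view = memoryview(data)
t0 = time.perf_counter()
for off in range(0, SIZE, CHUNK):
    bytes(view[off:off + CHUNK])
ram_s = time.perf_counter() - t0

# Pass 2: stream the same data back from the file.
os.lseek(fd, 0, os.SEEK_SET)
t0 = time.perf_counter()
while os.read(fd, CHUNK):
    pass
disk_s = time.perf_counter() - t0

os.close(fd)
os.unlink(path)

print(f"RAM : {SIZE / ram_s / 1e9:5.1f} GB/s")
print(f"file: {SIZE / disk_s / 1e9:5.1f} GB/s")
```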

[–] ___@lemm.ee 4 points 9 months ago

Essentially we already do. If you run out of RAM, memory gets paged out to disk and read back as needed. You would know this if you ever used Windows ME.
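On Linux you can see how much of that paging has actually happened since boot; a minimal sketch reading the swap counters from /proc/vmstat:

```python
# pswpin / pswpout count memory pages swapped in from / out to disk
# since boot (page size is usually 4 KiB on x86).
with open("/proc/vmstat") as f:
    stats = dict(line.split() for line in f)

print("pages swapped in :", stats.get("pswpin", "n/a"))
print("pages swapped out:", stats.get("pswpout", "n/a"))
```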
