I was hoping there would be something about resource usage. Curse you, steamwebhelper
Oh yeah. If you leave a Steam page open, it'll create a very slow memory leak. I left the store page open for about a week and came back to a 6 GB steamwebhelper xd
Wait, really?! I feel like that's a bug that really needs addressing
what kind of sane person is gonna leave a steam page open for over a week, though?
I can see it easily being accidentally left open if you have a busy desktop and don't regularly restart your PC
In my case, I use multiple workspaces. I had a workspace set up for gaming and left the window on the store page. Had a busy week so I didn't game. I usually don't turn off my computer because I contribute to Folding@Home at night. The week flies by and I start to wonder why my RAM is full and investigate :D
I mean... I often open Steam for something and then kind of forget about it.
What kind of sane person is gonna debug and track down a memory leak, though? Just buy more ram
What kind of sane person is going to buy more RAM? Just download it
Just mount your swap partition on google drive. xd
people who like having money?????
The memory leak's at a rate that doesn't matter (~30 MB/hour), which makes it hard to track down & reproduce. Also, the solution would be to just navigate to your Steam library, or just not leave the window open like that :p
no it doesn't?
It's a web browser, it's only going to come from one place lmao.
The Web.
yeah, and the answer is Electron. It's Electron causing the memory leak; the solution is to not use it.
I completely forgot Steam used electron.
I haven't, because it's been a completely unusable buggy mess ever since I installed it and it transitioned to Electron. It routinely uses 1 GB of RAM, 2 GB on bad days. That's 2 whole USD wasted, and that's the price of CHEAP RAM.
Graphics just don't work (that might be an Nvidia problem, to be fair), menus are broken, buttons haven't worked, and refactoring the UI only seems to make it slower. Scrolling a literal single web page is practically unusable due to lag and stuttering. If you use Proton and automatic shader compilation, it's useless because you can't even configure how you want it to run. Don't want to compile 12 GB of shaders for a game that's 200 GB? That you play 2 times a year? Get fucked.
You can skip shader compilation. Also, it sounds like a driver issue. Electron has GPU support by default, so it shouldn't be laggy.
On a per-game basis? Last I checked you could only enable it for all games or no games. I don't want to manually skip compilations, I want to select a list of games that automatically compile shaders. I don't want BeamNG to compile 70 GB of shaders because it got an update this week unless I'm actually going to play it, but for something I play more frequently, like Factorio, I'd like that to happen as a regular function.
I'm fairly sure it's driver related, but Steam is built on Electron, and Chrome/Firefox work perfectly fine, and so does Discord's Electron, so my only guess is that Nvidia driver gaming is happening. Or Steam has the single most incompetent installation of Electron across any software I use.
I have no idea why only steam would be affected. That just doesn't make any sense.
Doesn't seem to matter whether it's enabled or not; it performs like shit, idk why. Again, Firefox and Chrome are fine, Discord is fine (it still runs like shit, but it's Discord, that's normal).
That honestly sounds like a good feature to request imo.
Are you on the steam beta? I've generally had a better experience there.
You could also try the low performance mode in Steam (Steam settings > Library > low performance mode).
It would be a great feature, and considering I own like 10 games and there are people out there who own like 200, I'm honestly shocked this wasn't immediately included? I feel like this is such an obvious thing. I would be surprised if somebody hasn't already requested it.
No, I'm on stable, because I like my software to work, though maybe I should fuck around with my Steam install sometime.
I could, but I'm running a 1070 and a 5900X; I'm pretty sure it's not a hardware limitation. But I might mess with that later.
Good luck brother :D
Hopefully it improves. I'm honestly waiting for Heroic to implement Steam, or for a decent CLI implementation to come about. SteamCMD does exist, but it's meant for server hosting; theoretically you could use it for client use, but I don't think that's recommended, for several reasons.
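For the curious, here's a rough sketch of what driving SteamCMD as a makeshift client could look like from a script. SteamCMD and the `+login`/`+force_install_dir`/`+app_update` commands are real, but the app ID, username, and install path below are placeholders, and Valve only really supports this flow for dedicated servers:

```python
# Rough sketch: using SteamCMD as a makeshift CLI "client".
# Assumes steamcmd is on PATH; app ID, username, and paths are placeholders.
import subprocess

APP_ID = "427520"                            # placeholder app ID
INSTALL_DIR = "/home/user/Games/factorio"    # placeholder install path

cmd = [
    "steamcmd",
    "+force_install_dir", INSTALL_DIR,   # SteamCMD wants this set before login
    "+login", "your_steam_username",     # interactive password / Steam Guard prompt
    "+app_update", APP_ID, "validate",   # download/update the depot and verify it
    "+quit",
]

subprocess.run(cmd, check=True)
```

That gets you downloads, but none of the client-side plumbing (overlay, shader cache, cloud saves), which is presumably part of why it's not recommended.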
Also not a fan of downloads eating a core.
Isn’t that just because they download a compressed format?
Encrypted and compressed. But it's way beyond what I'd expect that to take.
It also depends on your network speed.
My internet isn't that fast (100 Mbps), and it was still maxing one core.
How old/new is your CPU? CPUs have gotten really fast recently, in the last 5-8 years.
It's a bit old, it's a Ryzen 3500U (laptop from 2017/2018 ish), so at the older end of your range. I'm still maxing my internet speed, it just kicks the fan on.
I haven't checked my desktop (Ryzen 5600) because I don't hear the fan when the CPU gets pegged (never thought to check), but maybe I will the next time I download a game.
That's definitely not in the range of, like, super old CPUs, but it's also not super fast either. Modern CPUs should be like 20-30% faster, I think, in single core, which is what compression uses.
Realistically, compression should be as aggressive as possible, because it saves bandwidth, and the CPU is basically a free resource during a download.
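If you want a back-of-the-envelope feel for whether single-threaded decompression can keep up with a 100 Mbps (~12.5 MB/s) link, here's a tiny stdlib-only sketch. It uses zlib as a stand-in; Steam's actual codec and chunk format are different, so treat the numbers as illustrative only:

```python
# Rough check: single-threaded decompression throughput vs a ~12.5 MB/s link.
# zlib is only a stand-in for whatever codec Steam actually uses.
import os
import time
import zlib

# 64 MiB sample: a bit of random data plus a lot of zeros, so it compresses some.
payload = os.urandom(8 * 1024 * 1024) + bytes(56 * 1024 * 1024)
compressed = zlib.compress(payload, 9)

start = time.perf_counter()
zlib.decompress(compressed)
elapsed = time.perf_counter() - start

mb_per_s = len(payload) / elapsed / 1e6
print(f"ratio: {len(compressed) / len(payload):.2f}, "
      f"decompress: {mb_per_s:.0f} MB/s vs ~12.5 MB/s link")
```

On most CPUs from the last decade that prints hundreds of MB/s, which is why a pegged core at 100 Mbps looks more like "something else is going on" than raw decompression cost.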
Sure, and I have no issues with compression or encryption on the device. In fact, I used full-disk encryption for a few years and had zero issues, and I've done plenty of compression and decompression as well (in fact, I think my new OS uses compression at the FS layer). Most of the time that stuff is completely a non-issue, since CPUs have special instructions for common algorithms; I think they're just using something fancy that doesn't have hardware acceleration on my CPU, or something.
I'm planning to replace it, but it still works well for what I need it for: Minecraft for the kids, Rust Dev for me, and indie games and videos every so often. I'm on integrated graphics and it's still holding up well.
It's my understanding that on-disk compression is different from networked compression. Usually networked compression uses gzip, iirc, whereas on-disk compression tends to use something in the LZ family. File downloads are generally less important than a file system, so you can trivially get away with costly and expensive compression there.
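A quick sketch of that trade-off, assuming the third-party `lz4` package is installed (`pip install lz4`); gzip squeezes harder, LZ4 runs much faster, and the exact numbers depend entirely on the data:

```python
# Illustrative only: gzip-style (network) vs LZ4-style (disk) compression.
# Requires the third-party "lz4" package.
import gzip
import time

import lz4.frame

data = b"some fairly repetitive game asset data " * 500_000  # ~19 MB sample

def bench(name, compress, decompress):
    t0 = time.perf_counter()
    blob = compress(data)
    t1 = time.perf_counter()
    decompress(blob)
    t2 = time.perf_counter()
    print(f"{name}: ratio {len(blob) / len(data):.3f}, "
          f"compress {t1 - t0:.2f}s, decompress {t2 - t1:.2f}s")

bench("gzip -9", lambda d: gzip.compress(d, 9), gzip.decompress)  # network-style
bench("lz4    ", lz4.frame.compress, lz4.frame.decompress)        # disk-style
```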
Yeah, because you're optimizing for different things.
The server isn't live-compressing it; in most cases it's pre-compressed binaries being shipped hundreds of thousands of times over. Compression is primarily to minimize bandwidth (and also to speed up downloads, since the network speed is usually the bottleneck). You can either cache the compressed files, or gate the download based on decompression speed.
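The "compress once, ship many times" idea in sketch form; the file paths are whatever you feed it, and a real web server or CDN (nginx's `gzip_static`, for example) would then just stream the `.gz` with `Content-Encoding: gzip`:

```python
# Pre-compress build artifacts at packaging time so the CPU cost is paid once,
# not on every download. Paths come from the command line; nothing is assumed
# about any particular server.
import gzip
import shutil
import sys
from pathlib import Path

def precompress(path: Path) -> Path:
    out = Path(str(path) + ".gz")
    with path.open("rb") as src, gzip.open(out, "wb", compresslevel=9) as dst:
        shutil.copyfileobj(src, dst)   # one-time CPU cost at build time
    return out

if __name__ == "__main__":
    for name in sys.argv[1:]:
        print("wrote", precompress(Path(name)))
```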
Usually, most disks are faster than any network connection available, so it's pretty hard to hit that bottleneck these days. HDDs included, unless you're using SMR drives in a specific use case, and you're definitely never hitting it with an SSD.
Although on the FS side, you would optimize for minimum latency. Latency really fucks up a file system, that and corrupt data, so if you can ensure a minimal latency impact, as well as a reliable compression/decompression algorithm, you get a decent trade-off: some size optimization for a bit of latency and CPU time.
Whether or not FS-based compression is good, I'm not quite sure yet; I'm bigger on de-duplication, personally.
It is for generated data, like a JSON API. Static content is often pre-compressed, though, since there's no reason to do that on every request if it can be done once. The compression format is largely limited to whatever the client supports, and gzip works pretty much everywhere, so it's generally preferred.
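The "whatever the client supports" part is just content negotiation on the `Accept-Encoding` header. A minimal sketch (the helper function here is made up, not any framework's API):

```python
# Hypothetical helper showing why gzip is the lowest common denominator:
# compress only if the client advertised support for it, else send as-is.
import gzip

def encode_response(body: bytes, accept_encoding: str) -> tuple[bytes, dict]:
    offered = {token.split(";")[0].strip() for token in accept_encoding.split(",")}
    if "gzip" in offered:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}  # client can't decompress it, send identity

body, headers = encode_response(b'{"users_online": 42}', "gzip, deflate, br")
print(headers, len(body), "bytes on the wire")
```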
At least that's my understanding. Every project I've worked on has a pretty small userbase, so something like 50-100 concurrent users (mostly B2B projects), meaning we didn't have the same problems as something like a CDN might have.
I'm not really sure how latency is related for FS operations. Are you saying if the CPU is lagging behind the read speed, it'll mess up the stream? Or are you saying something else? I'm not an expert on filesystems.
Yeah, if the page is dynamically generated, you would likely live-compress it, but it would also be so little data that the CPU overhead would be pretty minimal, and you could still cache the compressed data if you tried hard enough. For something like Steam, where you're effectively shipping and unpacking a 50-200 GB zip file, there's no reason not to compress it statically.
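The "cache the compressed data" bit doesn't even take much trying; a memoized gzip wrapper covers the common case. Sketch below, where `render_page` is a made-up stand-in for whatever actually generates the response:

```python
# Live-compress dynamic output, but only once per distinct result.
import gzip
import json
from functools import lru_cache

def render_page(user_count: int) -> str:
    # stand-in for a dynamically generated JSON API response
    return json.dumps({"users_online": user_count, "status": "ok"})

@lru_cache(maxsize=256)
def compressed_page(user_count: int) -> bytes:
    # small payload, so the gzip cost is tiny; the cache skips repeats entirely
    return gzip.compress(render_page(user_count).encode())

body = compressed_page(42)  # compresses
body = compressed_page(42)  # served from cache, no CPU cost
```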
It's important because the entire system is based on a filesystem. If you're making calls to a drive regularly and in high quantity, latency is going to start being a bottleneck pretty quickly. Obviously it doesn't matter much for certain things, but after a certain point it can start being problematic. There's practically no chance of corruption or anything like that, unless you have a dysfunctional compression/decompression algorithm, but you would likely expect system performance to be noticeably slower in disk benchmarks specifically, especially if you're running really fast drives like Gen 4 NVMe SSDs. Ideally it shouldn't be a huge thing, but it's something to consider if you care about it.
There are two primary things to consider when making a functional file system: one is atomicity, because you want to be able to write data and be certain it was written correctly (to prevent corruption), and the other is maximizing performance. File I/O is always one of the slowest forms of interaction; it's why you have RAM, yes, and it's why your CPU has cache, but optimizing drive performance in software is still free performance gains. That's an improvement that can make heavy read/write operations faster, more efficient, and more scalable, which in the world of super fast modern NVMe drives is something we're all thankful for.

If you remember the switch from spinning rust to solid state storage for operating systems, you'll have seen a similar improvement. HDDs necessarily have really bad random IOPS performance: they have to literally, physically find the data on the disk and read it back. It's mechanically limited, which increases latency considerably. SSDs don't have this problem, because they're a matrix of registers, so you get MASSIVELY uplifted random IOPS performance from an SSD compared to an HDD. And that's still true today.
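If anyone wants to see the random-IOPS gap for themselves, here's a very rough stdlib sketch of timing random 4 KiB reads. Fair warning: a warm page cache makes everything look unrealistically fast, so a real benchmark would use something like fio with O_DIRECT; the file size and read count here are arbitrary:

```python
# Very rough random 4 KiB read timing. Page-cache hits will inflate the numbers;
# this is only meant to illustrate what "random IOPS" measures.
import os
import random
import tempfile
import time

BLOCK = 4096
FILE_SIZE = 256 * 1024 * 1024   # 256 MiB scratch file
READS = 2000

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

fd = os.open(path, os.O_RDONLY)
offsets = [random.randrange(0, FILE_SIZE - BLOCK) // BLOCK * BLOCK for _ in range(READS)]

start = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)
elapsed = time.perf_counter() - start

os.close(fd)
os.unlink(path)
print(f"{READS / elapsed:.0f} random 4K reads/s "
      f"({elapsed / READS * 1e6:.1f} µs avg latency)")
```

On a spinning disk with a cold cache that latency lands in the milliseconds per read; on an NVMe SSD it's tens of microseconds, which is the whole story of that upgrade.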