this post was submitted on 30 Mar 2025
149 points (97.5% liked)

Linux


The diversity of Linux distributions is one of its strengths, but it can also be challenging for app and game development. Where do we need more standards? For example, package management, graphics APIs, or other aspects of the ecosystem? Would such increased standards encourage broader adoption of the Linux ecosystem by developers?

(page 2) 50 comments
[–] dosse91@lemmy.trippy.pizza 69 points 3 days ago* (last edited 3 days ago) (10 children)

Generally speaking, Linux needs better binary compatibility.

Currently, if you compile something, it's usually dynamically linked against dozens of libraries that are present on your system, but if you give the executable to someone else with a different distro, they may not have those libraries or their version may be too old or incompatible.
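You can see those runtime expectations directly with `ldd`; everything it lists is something the target system must provide in a compatible version. `/bin/sh` is used here only because it exists on virtually every distro:

```shell
# List the shared libraries a dynamically linked binary expects the target
# system to provide. Every line of output is a runtime dependency that a
# different distro may ship in a different (or no) version.
deps=$(ldd /bin/sh)
echo "$deps"
```

On a glibc system you will typically see libc.so.6 plus the dynamic loader itself (ld-linux-*.so); give the binary to a system missing any of these and it won't even start.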

Statically linking programs is often impossible and generally discouraged, making software distribution a nightmare. Flatpak and similar systems made things easier, but it's such a crap solution: it basically involves having an entire separate OS installed in parallel, with its own problems, like shipping a version of Mesa that's too old for a new GPU, and stuff like that. Applications should be able to ship with everything they need; there is no reason for dynamic linking to be so important on Linux these days.

I'm not in favor of proprietary software, but better binary compatibility is a necessity for Linux to succeed, and I'm saying this as someone who's been using Linux for over a decade and who refuses to install any proprietary software. Sometimes I find myself running apps and games in Wine even when a native version is available, just to avoid the hassle of having to find and probably compile libobsoletecrap-5.so.

[–] beyond@linkage.ds8.zone 9 points 2 days ago* (last edited 2 days ago) (1 children)

Disagree - making it harder to ship proprietary blob crap "for Linux" is a feature, not a bug.

[–] pr06lefs@lemmy.ml 23 points 3 days ago (2 children)

Nix can deal with this kind of problem. It does take disk space if you're going to have radically different deps for different apps, but you can 100% install Firefox from 4 years ago and new Firefox on the same system, and they each get the deps they need.

[–] PopeRigby@beehaw.org 8 points 2 days ago

Someone managed to install Firefox from 2008 on a modern system using Nix. Crazy cool: https://blinry.org/nix-time-travel/

[–] apt_install_coffee@lemmy.ml 5 points 2 days ago

Statically linking is absolutely a tool we should use far more often, and one we should get better at supporting.

[–] catloaf@lemm.ee 8 points 3 days ago

I don't think static linking is that difficult. But for sure it's discouraged, because I can't easily replace a statically-linked library, in case of vulnerabilities, for example.

You can always bundle the dynamic libs in your package and put the whole thing under /opt, if you don't play well with others.
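A minimal sketch of that /opt pattern, staged in a temp directory so it runs anywhere; the `/opt/myapp` layout and the `myapp` names are hypothetical:

```shell
# Sketch: ship private libraries next to the app and point the loader at them
# via a wrapper script, instead of depending on system-wide library versions.
set -eu
APPDIR=$(mktemp -d)/myapp            # stands in for /opt/myapp
mkdir -p "$APPDIR/bin" "$APPDIR/lib"

# The "real" binary; an actual package would ship a compiled executable here.
cat > "$APPDIR/bin/myapp-real" <<'EOF'
#!/bin/sh
echo "loader path: $LD_LIBRARY_PATH"
EOF
chmod +x "$APPDIR/bin/myapp-real"

# The wrapper users actually invoke: prepend the bundled lib dir for the loader.
cat > "$APPDIR/bin/myapp" <<EOF
#!/bin/sh
export LD_LIBRARY_PATH="$APPDIR/lib\${LD_LIBRARY_PATH:+:\$LD_LIBRARY_PATH}"
exec "$APPDIR/bin/myapp-real" "\$@"
EOF
chmod +x "$APPDIR/bin/myapp"

"$APPDIR/bin/myapp"
```

Same idea as setting an RPATH of `$ORIGIN/../lib` at link time, just done at runtime with no relinking needed.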

[–] MyNameIsRichard@lemmy.ml 9 points 3 days ago (1 children)

You'll never get perfect binary compatibility because different distros use different versions of libraries. Consider Debian and Arch which are at the opposite ends of the scale.

[–] 2xsaiko@discuss.tchncs.de 28 points 3 days ago (4 children)

And yet, ancient Windows binaries will still (mostly) run and macOS allows you to compile for older system version compatibility level to some extent (something glibc alone desperately needs!). This is definitely a solvable problem.

Linus keeps saying “you never break userspace” wrt the kernel, but userspace breaks userspace all the time and all people say is that there’s no other way.

[–] DarkMetatron@feddit.org 7 points 3 days ago (1 children)

It works under Windows because the Windows binaries come with all their dependency .dll files (and/or they need some ancient Visual C++ runtime installed).

This is more or less the Flatpak way: bundling all dependencies into the package.

Just use Linux the Linux way: install your program via the package manager (including Flatpak) and let that handle the dependencies.

I've run Linux for over 25 years now and have had maybe a handful of cases where userland broke, and that was because I didn't follow what I was told during a package upgrade.

The amount of time I've spent getting myself out of DLL hell on Windows, on the other hand... The Linux way is better and way more stable.

[–] 2xsaiko@discuss.tchncs.de 4 points 2 days ago (2 children)

I'm primarily talking about Win32 API when I talk about Windows, and for Mac primarily Foundation/AppKit (Cocoa) and other system frameworks. What third-party libraries do or don't do is their own thing.

There's also nothing wrong with bundling specialized dependencies in principle if you provide precompiled binaries. If it's shipped via the system package manager, that can manage the library versions and in fact it should do that as far as possible. Where this does become a problem is when you start shipping stuff like entire GUI toolkits (hello bundled Qt which breaks Plasma's style plugins every time because those are not ABI-compatible either).

The amount of time I've spent getting myself out of DLL hell on Windows, on the other hand... The Linux way is better and way more stable.

Try running an old precompiled Linux game (say Unreal Tournament 2004 for example). They can be a pain to get working. This is not just some "ooooh gotcha" case, this is an important thing that's missing for software preservation and cross-compatibility, because not everything can be compiled from source by distro packagers, and not every unmaintained open-source software can be compiled on modern systems (and porting it might not be easy because of the same problem).

I suppose what Linux is severely lacking is a comprehensive upwards-compatible system API (such as Win32 or Cocoa) which reduces the churn between distros and between version releases. Something that is more than just libc.

We could maybe have had this with GNUstep, for example (and it would have solved a bunch of other stuff too). But it looks like nobody cares about GNUstep and instead it seems like people are more interested in sidestepping the problem with questionably designed systems like Flatpak.

[–] nawordar@lemmy.ml 1 points 1 day ago

There was the Linux Standard Base project, but it had multiple issues and was eventually abandoned. Some distributions still ship an /etc/lsb-release file for compatibility.
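For what it's worth, the one bit of this that did get standardized in practice is distro identification: `/etc/os-release` (from systemd, now near-universal) superseded `/etc/lsb-release` for that purpose. A defensive way to read whichever is present, since neither file is strictly guaranteed to exist:

```shell
# Print basic distro identity from whichever standard file is present.
for f in /etc/os-release /etc/lsb-release; do
  if [ -r "$f" ]; then
    echo "found: $f"
    grep -E '^(ID|NAME|DISTRIB_ID)=' "$f" || true
    break
  fi
done
```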

[–] DarkMetatron@feddit.org 3 points 2 days ago* (last edited 2 days ago)

Unreal Tournament 2004 depends on SDL 1.3 if I recall correctly, and SDL is not a core system library on Linux or on any other OS.

Binary-only programs are foreign to Linux, so yes, you will get issues integrating them. Linux works best when everyone plays by the same rules, and for Linux that means having the sources available.

Linux at its core is highly modifiable. Besides the kernel (and nowadays maybe systemd), there is no core system that an API could be defined against. Linux on a home theater PC is a different system than Linux on a server, than Linux on a gaming PC, than Linux on a smartphone.

You can boot the Kernel and a tiny shell as init and have a valid, but very limited, Linux system.

Linux has its own set of rules and its own way of doing things, and trying to force it to be something else cannot and will not work.

[–] Giooschi@lemmy.world 7 points 3 days ago (1 children)

Linus got it right; it's just that other fundamental userspace utilities didn't.

[–] 2xsaiko@discuss.tchncs.de 5 points 3 days ago

Yeah, that's what I mean.

[–] kibiz0r@midwest.social 44 points 3 days ago (3 children)

ARM support. Every SoC is a new horror.

Armbian does great work, but if you want another distro you’re gonna have to go on a lil adventure.

[–] Mio@feddit.nu 27 points 3 days ago (2 children)

A configuration GUI standard. Usually there is a config file that I am supposed to edit as root, usually in a terminal.

There should be a general GUI tool that reads those files and obeys another file containing the rules. Say: if you enable this feature, then that other one can't be on at the same time. Or: this number has to be between 1 and 5, no more, no less. Basic validation. And the program could be run with --validation to decide for itself whether the config looks good or not.
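A rough sketch of what that validation idea could look like for a single rule; the config keys and the 1-to-5 range are invented purely for illustration:

```shell
# Validate one setting of a plain key=value config against a range rule
# ("max_workers must be an integer between 1 and 5"). A real tool would read
# the rules themselves from a second, declarative file.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
max_workers=3
enable_cache=yes
EOF

val=$(sed -n 's/^max_workers=//p' "$cfg")
case "$val" in
  [1-5]) status="ok" ;;
  *)     status="out of range (1-5)" ;;
esac
echo "max_workers=$val: $status"
```

A GUI could consume the same rules file to grey out invalid combinations before the user ever saves the config.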

[–] lime@feddit.nu 13 points 3 days ago (1 children)
[–] original_reader@lemm.ee 11 points 2 days ago

I agree. openSUSE should set the standards in this.

Tbf, they really need a designer to polish it up visually a bit. It exudes its "sysadmin only" vibes a bit too much, in my opinion. 🙂

[–] LovableSidekick@lemmy.world 10 points 2 days ago* (last edited 2 days ago)

Small thing about filesystem dialogs. In file open/save dialogs some apps group directories at the top and others mix them in alphabetically with files. My preference is for them to be grouped, but being consistent either way would be nice.

[–] SwingingTheLamp@midwest.social 55 points 3 days ago (2 children)

One that Linux should've had 30 years ago is a standard, fully-featured dynamic library system. Its shared libraries are more akin to static libraries, just linked at runtime by ld.so instead of ld. That means that executables are tied to particular versions of shared libraries, and all of them must be present for the executable to load, leading to the dependency hell that package managers were developed, in part, to address. The dynamically-loaded libraries that exist are generally non-standard plug-in systems.

A proper dynamic library system (like in Darwin) would allow libraries to declare what API level they're backwards-compatible with, so new versions don't necessarily break old executables. (It would ensure ABI compatibility, of course.) It would also allow processes to start running even if libraries declared by the program as optional weren't present, allowing programs to drop certain features gracefully, so we wouldn't need different executable versions of the same programs with different library support compiled in. If it were standard, compilers could more easily provide integrated language support for the system, too.
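Shell scripts already live by that "optional dependency" model, which makes a decent mental model for what optional dynamic libraries would give compiled programs: probe at startup, degrade gracefully. (The choice of `sed` as the probed tool below is arbitrary.)

```shell
# Analogue of an optional library: detect a capability at startup and disable
# the corresponding feature, instead of refusing to start at all the way a
# missing NEEDED entry makes ld.so refuse to load an executable.
if command -v sed >/dev/null 2>&1; then
  feature_status="enabled"
else
  feature_status="disabled"
fi
echo "rewrite feature: $feature_status"
```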

Dependency hell was one of the main obstacles to packaging Linux applications for years, until Flatpak, Snap, etc. came along to brute-force away the issue by just piling everything the application needs into a giant blob.

[–] steeznson@lemmy.world 3 points 2 days ago* (last edited 1 day ago)

I find the Darwin approach to dynamic linking too restrictive. Sometimes there needs to be a new release which is not backwards compatible or you end up with Windows weirdness. It is also too restrictive on volunteer developers giving their time to open source.

At the same time, containerization where we throw every library - and the kitchen sink - at an executable to get it to run does not seem like progress to me. It's like the meme where the dude is standing on a huge horizontal pile of ladders to look over a small wall.

At the moment you can choose to use a distro which follows a particular approach to this problem; one which enthuses its developers, giving some guarantee of long term support. This free market of distros that we have at the moment is ideal in my opinion.

[–] LovableSidekick@lemmy.world 3 points 2 days ago

The term "dependency hell" reminds me of the "DLL hell" that Windows devs used to complain about. Something must have changed around 2000, because I remember an article announcing "no more DLL hell," but I don't remember what the change was.

[–] irotsoma@lemmy.blahaj.zone 12 points 3 days ago

Not offering a solution here exactly, but as a software engineer and architect: this is not a Linux-only problem. It exists across all software. Very few applications are fully self-contained these days, because it's too complex to build everything from scratch every time. And a lot of software depends on the way some poorly documented feature worked at the time, which was actually a bug that eventually got fixed, breaking the applications that depended on it. Also, any time improvements are made to a library, they have the potential to break your application, and most developers don't get time to test against every newer version.

The real solution would be better CI/CD build systems that automatically test applications against newer versions of libraries and report dependencies better. But so many applications are short on automated unit and integration tests, because writing them is tedious and many companies and younger developers consider it a waste of time/money. So it would only work in well-maintained and well-managed open source applications, really. But who has time for all that?
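As a sketch of that kind of pipeline (hypothetical project, placeholder `libfoo` dependency; GitHub Actions syntax chosen only as a familiar example), a simple version matrix is often enough to surface dependency breakage early:

```yaml
# Hypothetical CI job: build and test the same app against several versions
# of a dependency, so an incompatible upgrade is caught before users hit it.
name: dep-matrix
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        libfoo: ["1.2", "1.3", "2.0"]   # placeholder dependency versions
    steps:
      - uses: actions/checkout@v4
      - run: ./configure --with-libfoo=${{ matrix.libfoo }}
      - run: make check
```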

Anyway, it's something I've been thinking about a lot at my current job as an architect for a major corporation. I've had to do a lot of side work to get things even part of the way there. And I don't have to deal with multiple OSes and architectures. But I think it's an underserved area of software development and distribution that is just not "fun" enough to get much attention. I'd love to see it at all levels of software.

[–] asudox@lemmy.asudox.dev 22 points 3 days ago (2 children)

Flatpak with more improvements to size and sandboxing could be accepted as the standard packaging format in a few years. I think sandboxing is a very important factor as Linux distros become more popular.

[–] enumerator4829@sh.itjust.works 17 points 3 days ago (2 children)

Stability and standardisation within the kernel for kernel modules. There are plenty of commercial products that use proprietary kernel modules that basically only work on a very specific kernel version, preventing upgrades.

Or they could just open source and inline their garbage kernel modules…

[–] fxdave@lemmy.ml 1 points 2 days ago (1 children)

I don't use any of these, but I'm curious. Could you please write some examples?

It mostly affects people working with "fun" enterprise hardware or special-purpose things.

But to take one example, proprietary drivers for high performance network cards, most likely from Nvidia.

[–] Mihies@programming.dev 22 points 3 days ago (5 children)

I'd say games. If that really takes off, Linux would replace Windows, and all other standards would follow.

[–] Overspark@feddit.nl 59 points 3 days ago (2 children)

That already happened though. Tens of thousands of games on Steam can be played by hitting the install and then the play button. Only a few "competitive multiplayer" holdouts with rootkits and an irrational hatred of Linux don't work.

[–] Fecundpossum@lemmy.world 20 points 3 days ago (1 children)

Yep. Two solid years of steady gaming on various Linux distributions. No issues aside from no more pubg, no more valorant. Oh wait, that’s not an issue at all. Fuck their rootkits.

[–] verdigris@lemmy.ml 14 points 3 days ago (1 children)

Have you tried recently? We've been pretty much at parity for years now. Almost every game that doesn't run is because the devs are choosing to make it that way.

[–] teawrecks@sopuli.xyz 10 points 3 days ago

It did really take off about 5 years ago.

[–] smiletolerantly@awful.systems 18 points 3 days ago

At this point, package management is the main differentiating factor between distro (families). Personally, I'm vehemently opposed to erasing those differences.

The "just use flatpak!" crowd is kind of correct when we're talking solely about Linux newcomers, but if you are at all comfortable with light troubleshooting if/when something breaks, each package manager has something unique and useful to offer. Pacman and the AUR are a good example, but personally, you can wring nixpkgs from my cold, dead hands.

And so you will never get people to agree on one "standard" way of packaging, because doing your own thing is kind of the spirit of open source software.

But even more importantly, this should not matter to developers. It's not really their job to package the software, for reasons including that it's just not reasonable to expect them to cater to all package managers. Let distro maintainers take care of that.
