Open Source
All about open source! Feel free to ask questions, share news, and post interesting stuff!
Useful Links
- Open Source Initiative
- Free Software Foundation
- Electronic Frontier Foundation
- Software Freedom Conservancy
- It's FOSS
- Android FOSS Apps Megathread
Rules
- Posts must be relevant to the open source ideology
- No NSFW content
- No hate speech, bigotry, etc.
Related Communities
- !libre_culture@lemmy.ml
- !libre_software@lemmy.ml
- !libre_hardware@lemmy.ml
- !linux@lemmy.ml
- !technology@lemmy.ml
Community icon from opensource.org, but we are not affiliated with them.
Imagine finding a backdoor within 45 days of its release into a supply chain instead of months after infection. This is an astoundingly rapid discovery.
Fedora 41 and Rawhide, Arch, Debian's testing and unstable branches, and some apps like Homebrew were affected. That's not counting Microsoft and other corporations who don't disclose their stack.
What a time to be alive.
Arch was never affected, as described in their news post about it. Arch users had malicious code on their hard disks, but not the part that would have called into it.
Before resting on our laurels, we should consider that it may be more widespread but simply not being disclosed until after it's patched.
It would be wise to be on the lookout for security patches for the next few days.
Consider this the exception to the rule. There's no reason we should assume this timeline is the norm.
Disguising the backdoor as a corrupted test file, then 'uncorrupting' it, is crazy.
What's also pretty bad is that it intersects with another problem: bus factor.
Having just one person as maintainer of a library is pretty bad. All it takes is one accident and no one knows how to maintain it.
So, you're encouraged to add more maintainers to your project.
But yeah, who do you add if it's a security-critical project? Unless you happen to have a friend who wants to get in on it, you're basically always picking a stranger.
> Unless you happen to have a friend that wants to get in on it, you're basically always picking a stranger.
At the risk of sounding tone-deaf to the situation that caused this: that's what community is all about. The likelihood that you truly know the neighbors you've talked to for years is practically nil. Your boss, your co-workers, your best friend, and everyone you know has some facet you have never seen. The unknown is the heart of what makes someone a stranger.
We must all trust someone, or we are alone.
Finding strangers to collaborate with, who share your passions, is what makes society work. The internet allows you ever greater access to people you would otherwise never have met, both good and bad.
Everyone you've ever met was once a stranger. To make them known, extend blind trust, then quietly verify.
honestly these people should be getting paid if a corporation wants to use a small one-man FOSS project in their own multibillion-dollar software. the lawyer types in FOSS could put that in GPLv5 or something whenever we feel like doing it.
also hire more devs to help out!
If you think people are going to be trustworthy just because they are getting paid you are naive.
not trustworthy per se, but maybe less overworked and less inclined to review code hastily, or less tired and less prone to the bad judgement that makes such a project vulnerable to stuff like this.
these people maintain the basis of our entire software infrastructure thanklessly for us, in between the full-time jobs they need to survive. this has to change.
as for trust in FOSS projects, the community will often notice bad-faith code, just like it did here (and very quickly this time, i might add!)
I think bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.
In the event of the complete loss of a sole maintainer, it should be possible to fork and continue a project. That requires a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve the change across potentially thousands of packages that rely on the project as a dependency.
Maybe, before a library or any software gets accepted into a distro, that distro does more due diligence to ensure it's a sustainable project and meets requirements like solid ownership?
The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I've never tried to get a distro to accept my software.
Nothing I've seen would completely avoid risk. Blackmail of an existing developer is not impossible to imagine. Even in this case, perhaps the new developer on xz started with pure intentions and was personally compromised later? (I don't seriously think that is the case here though - this feels very much state-sponsored and very well planned.)
It's good we're asking these questions. None of them are new, but the importance is ever increasing.
> Maybe, before a library or any software gets accepted into a distro, that distro does more due diligence to ensure it's a sustainable project and meets requirements like solid ownership?
And who is supposed to do that work? How do you know you can trust them?
- Careful choice of program to infect the whole Linux ecosystem
- Time it took to gain trust
- Level of sophistication in introducing backdoor in open source product
All of these are signs of a persistent threat actor, a.k.a. a state-sponsored hacker. Though we may never know the real motive, as it's now a failed project.
imagine how pissed they are. or maybe they silently alerted the microsoft guy themselves, as they only did it for cash and they'd already been paid
I am sure most superpowers in the world can easily sink two years into maintaining an obscure project in order to break a system as important as OpenSSH.
I doubt they will be pissed over one failure, and we can only hope there aren't more vulnerable projects out there (spoiler alert: probably many).
Hopefully shows why you should never trust closed source software
If the world didn’t have source access then we would have never found it
And if they do find it, it'll all be kept hush hush, they'll force an update on everyone with no explanation, some people will do everything in their power to refuse because they need to keep their legacy software running, and the exploit stays alive in the wild.
open source software getting backdoored by nefarious committers is not an indictment of closed source software in any way. this was discovered by a microsoft employee due to its effect on cpu usage and its introduction of faults in valgrind, neither of which required the source to discover.
the only thing this proves is that you should never fully trust any external dependencies.
The difference here is that if a state actor wants a backdoor in closed source software they just ask/pay for it, while they have to con their way in for half a decade to touch open source software.
How many state assets might be working for Microsoft right now, and we don't get to vet their code?
> "Paid for by a state actor"

Yes, who knows.

- Could be a lone "black hat" or a group of "black hats". Who knows.
- Could be the result of a lot of public criticism in the news regarding Pegasus spyware. Who knows.
- Could be paid by companies without any state actors involved. Who knows.
- Could be a lone programmer who wants power or is seeking revenge for some heated mailing list discussion. Who knows.
The question of trust has been raised in this case of a sole maintainer with health problems. What I asked myself is: how did this trust develop years ago? People trusted Linus Torvalds and used the Linux kernel to build distributions, to the point that the kernel grew from a tiny hobby thing into a giant project. At some point compiling from source became less fashionable and most people downloaded and installed binaries. New projects started, and instead of tar and gzip, things like xz and zstd were embraced. When do you trust a person or a project, and who else gets on board? Nowadays something like:
```
curl -sSL https://yadayada-flintstones-revival.com | bash
```
is considered a perfectly normal default installation method for some software. Open source software is cool and has produced a sort of revolution in technology, but there is still a lot of work to do.
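One mitigation is to separate fetching from executing: download the script, check it against a digest published out-of-band, and only then run it. A minimal Python sketch of the idea (the script bytes here are a stand-in; a real digest would come from the project's release page, fetched over a separate channel from the script itself):

```python
import hashlib

def verify_installer(script_bytes: bytes, expected_sha256: str) -> bool:
    """Return True only if the downloaded script matches the published digest."""
    return hashlib.sha256(script_bytes).hexdigest() == expected_sha256

# Hypothetical installer script and its trusted, out-of-band digest.
script = b"#!/bin/sh\necho install\n"
digest = hashlib.sha256(script).hexdigest()

assert verify_installer(script, digest)             # untampered script passes
assert not verify_installer(script + b"x", digest)  # any modification fails
```

This doesn't save you if the project itself is compromised (as with xz), but it at least stops a tampered download or a swapped-out server from running as root on your machine.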
Strongly doubt it's a lone actor for the reasons already given.
Bootstrapping a full distribution from a 357-byte seed file is possible in GUIX.
If that seed is compromised, then the whole software stack just won't build.
It's an answer to the "Trusting Trust" problem outlined by Ken Thompson in 1984.
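The principle can be modeled in a few lines: pin the expected hash of each stage's artifact, and refuse to build the next stage if anything upstream doesn't match. This is only a toy model of the idea, not GUIX's actual mechanism; the seed contents, stage names, and "build" step below are all made up:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def bootstrap(seed: bytes, stage_sources: list[bytes], pinned: list[str]) -> str:
    """Toy full-source bootstrap: each stage's output depends on the previous
    artifact and must match a pinned hash before the next stage is built."""
    artifact = seed
    for source, expected in zip(stage_sources, pinned):
        artifact = (digest(artifact) + digest(source)).encode()  # the "build"
        if digest(artifact) != expected:
            raise RuntimeError("artifact mismatch: refusing to continue")
    return digest(artifact)

seed = b"357-byte seed (stand-in)"
stages = [b"stage0 assembler", b"tiny C compiler", b"full toolchain"]

# Record pins from a trusted run, then verify a clean rebuild reproduces them.
pins, artifact = [], seed
for s in stages:
    artifact = (digest(artifact) + digest(s)).encode()
    pins.append(digest(artifact))

assert bootstrap(seed, stages, pins) == pins[-1]  # clean seed: chain completes
try:
    bootstrap(b"tampered seed", stages, pins)     # bad seed: chain refuses
except RuntimeError:
    pass
```

The point is exactly what the comment says: a compromised seed doesn't silently propagate, it makes the whole stack fail to build.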
Reading a bit into this: https://guix.gnu.org/manual/en/html_node/Binary-Installation.html

> The only requirement is to have GNU tar and Xz.

The irony!
Hahaha! Oh dear
That's cool. Thank you.
Any speculations on the target(s) of the attack? With Stuxnet, the US and Israel were willing to infect the whole world to target a few nuclear centrifuges in Iran.
Definitely a state-sponsored attack. It could be any nation, from the US to North Korea, and any other nation in between.
There is some indication, based on commit times and the VPN used, that it's somewhere in Asia. Really interesting detail in this write-up.
The timezone bit is near the end iirc.
Good writeup.
The use of ephemeral third party accounts to "vouch" for the maintainer seems like one of those things that isn't easy to catch in the moment (when an account is new, it's hard to distinguish between a new account that will be used going forward versus an alt account created for just one purpose), but leaves a paper trail for an audit at any given time.
I would think that Western state sponsored hackers would be a little more careful about leaving that trail of crumbs that becomes obvious in an after-the-fact investigation. So that would seem to weigh against Western governments being behind this.
Also, the last bit about all three names seeming like three different systems of Romanization of three different dialects of Chinese is curious. If it is a mistake (and I don't know enough about Chinese to know whether having three different dialects in the same name is completely implausible), that would seem to suggest that the sponsors behind the attack aren't that familiar with Chinese names (which weighs against the Chinese government being behind it).
Interesting stuff, lots of unanswered questions still.
Stuxnet was an extremely focused attack, targeting specific software on specific PLCs in a specific way to prevent them mixing up nuclear batter into a boom boom cake. Even if it managed to affect the whole world, it would be a laser compared to this wide net.
Given how low level it is and the timespan involved, there probably wasn't a specific use in mind. Just adding capability for a future attack to be determined later.
I had assumed it was probably a state sponsored attack. This looks like it was planned from the beginning, and any cyber attack that had years of planning and waiting strikes me as state-sponsored.
Historically there have been several instances of anarcho-communist organizations and social movements flourishing.
Most of them were sabotaged by plutocrat agents provoking violence or mischief, often just by giving angry militants in the region some materiel support and bad intel.
If the unexpected SSH latency hadn't been introduced, would this backdoor still be live?
I wonder how many OSS projects include backdoors that don't show up in performance checks.
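The xz backdoor was partly caught because sshd logins got noticeably slower, so even a crude latency baseline can catch this class of tampering. A rough sketch of such a check; the millisecond figures are invented for illustration, and a real monitor would sample actual login times:

```python
import statistics

def latency_regressed(baseline_ms: list[float], current_ms: list[float],
                      threshold: float = 3.0) -> bool:
    """Flag a regression when the current median sits more than `threshold`
    standard deviations above the baseline median."""
    mu = statistics.median(baseline_ms)
    sigma = statistics.stdev(baseline_ms) or 1e-9
    return statistics.median(current_ms) > mu + threshold * sigma

baseline = [102.0, 99.5, 101.2, 100.4, 98.8]    # hypothetical login times
healthy  = [100.9, 101.5, 99.1, 100.2, 101.0]
infected = [602.0, 598.5, 610.1, 605.3, 599.9]  # ~500 ms extra, as reported

assert not latency_regressed(baseline, healthy)
assert latency_regressed(baseline, infected)
```

Of course, an attacker careful enough to keep overhead within noise would sail right past this, which is the worrying part of the question above.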
~~Linux~~ Unix since 1979: upon booting, the kernel shall run a single "init" process with unlimited permissions. Said process should be as small and simple as humanly possible and its only duty will be to spawn other, more restricted processes.
Linux since 2010: let's write an enormous, complex system(d) that does everything from launching processes to maintaining user login sessions to DNS caching to device mounting to running daemons and monitoring daemons. All we need to do is write flawless code with no security issues.
Linux since 2015: We should patch unrelated packages so they send notifications to our humongous system manager about whether they're still running properly. It's totally fine to make a bridge between a process that accepts data from outside before anyone even logs in and our absolutely secure system manager.
Excuse the cheap systemd trolling - yes, it does actually split into several, less-privileged processes, but I do consider the entire design unsound. Not least because it creates a single, large provider of connection points that becomes ever more difficult to replace or create alternatives to (similar to web standards if only a single browser implementation existed).
Yes, I remember Linux in 1979...
Linus was a child prodigy.
And so the microkernel vs monolithic kernel debate continues...
> its only duty will be to spawn other, more restricted processes.
Perhaps I'm misremembering things, but I'm pretty sure SysVinit didn't run any "more restricted processes". It ran a bunch of bash scripts as root, and said bash scripts were often absolutely terrible.
I'm curious to know about the distro maintainers that were running bleeding edge with this exploit present. How do we know the bad actors didn't compromise their systems in the interim?
The potential of this would have been catastrophic had it made its way into stable versions. They could, for example, have accessed the build servers for Tor or Tails or Signal and targeted the build processes, not to mention banks and governments and who knows what else... Scary.
I'm hoping things change and we start looking at improving processes in the whole chain. I'd be interested to see discussions in this area.
I think the fact they targeted this package means other similar packages will be attacked. A good first step would be identifying packages used by many projects but maintained by one or very few devs, even more so if they run with root access. More devs means more chances of scrutiny, so attackers would likely go for packages with one or few devs to improve their odds of success.
I also think there needs to be an audit of every package shipped in the distros. A huge undertaking; perhaps it can be crowdsourced, and the big companies (FAAGMN etc.) should heavily step up here and set up a fund for audits.
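The triage idea above could be sketched as a simple score: dependents divided by maintainers, with the highest scores audited first. All package names and numbers below are made up for illustration:

```python
def audit_priority(packages: list[dict]) -> list[str]:
    """Rank packages so that many dependents + few maintainers = audit first."""
    def score(p: dict) -> float:
        return p["dependents"] / max(p["maintainers"], 1)
    return [p["name"] for p in sorted(packages, key=score, reverse=True)]

# Hypothetical inventory a distro might assemble from its package metadata.
inventory = [
    {"name": "tiny-compress", "maintainers": 1,  "dependents": 9000},
    {"name": "big-framework", "maintainers": 40, "dependents": 12000},
    {"name": "leftpad-ng",    "maintainers": 1,  "dependents": 300},
]

assert audit_priority(inventory)[0] == "tiny-compress"  # 9000 dependents, 1 dev
```

A real version would pull dependent counts from the distro's dependency graph and maintainer counts from upstream commit history, and would also weight packages that run as root or sit in the boot/login path.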
What do you think could be done to mitigate or prevent this in the future?
Interesting to hear and it wouldn't surprise me either tbh. At least none of my systems were vulnerable apparently, which is good because I am running the latest Ubuntu LTS and latest Proxmox - if those were affected then wow this would have affected so many more people.