this post was submitted on 07 Jul 2023
Memes
That's exactly the problem with many open source projects.
I recently experienced this firsthand when submitting some pull requests to Jerboa and following the devs: as long as there's no money funding the project, the devs are supporting it in their free time, which means little to no time for quality control. Mistakes happen. Most of them are uncritical, but as long as there's little time and expertise to audit code meaningfully and systematically, there will be bugs, and some of those bugs may be critical and security-relevant.
Even when you do have time. There have been "researchers" submitting malicious PRs and, when caught, acting like it's no big deal. An entire institution was even banned from submitting PRs to the Linux kernel.
https://www.bleepingcomputer.com/news/security/linux-bans-university-of-minnesota-for-committing-malicious-code/
Well, I think in most of those big incidents, people got caught. Doesn't that mean the concept kinda works?
Regarding the earlier comment: I think companies are only just starting to figure that out. You can't just take free libraries, databases, etc. If you're a big tech company, you'd better pay a few developers or fund an audit to make those libraries safe. That's your way of contributing. Otherwise your big platform will get hacked because you just took some 15-year-old's open source code.
Selection bias, though. We don't know how many have not yet been caught.
Agreed. Hell, I wouldn't be shocked if some corporations or even nation-state actors (e.g. the NSA) do this in a much better, more professional manner to ensure things like... backdoor access.
No hypothesis needed: https://en.wikipedia.org/wiki/EternalBlue can't have been a one-off, either.
Yeah, that was my thought. But more a dedicated program doing something similar with large FOSS projects.
They also have hardware/supply-chain interception programs to install backdoors in closed-source appliances (e.g. Cisco firewalls).
So something similar, but dedicated to open source PRs.
At least there have been attempts to subvert open cryptography standards through the standards process. And occasional suspicious pull requests in critical places; I assume those are done through cut-out proxies, so we don't know who tried.
We definitely know of some. The NSA slipped a backdoored RNG algorithm into RSA's cryptography a while back:
https://blog.cloudflare.com/how-the-nsa-may-have-put-a-backdoor-in-rsas-cryptography-a-technical-primer/
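The core trick in that backdoor (Dual_EC_DRBG) is that two public constants are secretly related, and whoever knows the relation can turn one RNG output into the generator's internal state. The real thing uses elliptic-curve points; here's a simplified discrete-log analogue in Python, purely illustrative (all the numbers are made up for the demo):

```python
# Toy analogue of the Dual_EC_DRBG trapdoor. NOT the real algorithm
# (which works on elliptic-curve points with truncated x-coordinates);
# this just shows how a hidden relation between two public constants
# lets the designer predict the generator's future output.
P_MOD = 2**61 - 1  # a Mersenne prime, chosen only for the demo

g2 = 3                       # public constant ("Q")
d = 123456789                # the designer's secret trapdoor exponent
g1 = pow(g2, d, P_MOD)       # public constant ("P"); relation to g2 is hidden

def drbg_step(state):
    """One step of the toy DRBG: emit an output and advance the state."""
    output = pow(g2, state, P_MOD)     # what users see
    new_state = pow(g1, state, P_MOD)  # internal next state
    return output, new_state

def attacker_recover_state(output):
    """Anyone who knows d can turn one output into the NEXT state:
    output^d = g2^(s*d) = (g2^d)^s = g1^s."""
    return pow(output, d, P_MOD)

seed = 42
out1, true_next_state = drbg_step(seed)
print(attacker_recover_state(out1) == true_next_state)
```

Once the attacker has the state, every future "random" number is predictable, which is exactly why a backdoored RNG undermines all the crypto built on top of it.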
Like others have said, it's survivorship bias. So the meme has some weight, but it doesn't make FOSS any less secure than closed source. If anything, it's better to allow anyone to examine it. Similar to how secrets can't be kept when large numbers of folks know them, the same goes here, I guess.
Yeah. I think the discussion is kind of nonsensical and a tautology. Nothing in life is 100% safe, FOSS or not. And we don't know what we don't know. We have a few cases where we know something got caught: people attempting malicious PRs, or intercepted network equipment.
I think the more interesting question has long been: what's (or who is) your threat? Against a sufficiently motivated and resourced adversary, there are few real obstacles. Conversely, some people are just not interesting because there's little or nothing to gain from attacking them.
Exactly. I just wanted to point out that most of the people here honestly have no idea what they're talking about.
If people had read the articles about that 'study' where malicious pull requests got accepted... and the aftermath... If they had read the articles about how the NSA(?) 'helped'(?!) with the mathematical constants of elliptic-curve encryption... how Cisco networking equipment got intercepted... If you knew how the internet and freedom worked... you'd know it's not that easy. Every 'simple' answer is just plain wrong. It depends... What is the threat model, what are you able and willing to invest, what are you trying to achieve? Sometimes you don't even know who's friend or foe.
Idk why people want to piss on open source software. It's a fact that you can have a look at open source code and not at closed source. And don't tell me nobody does, because I know I do. Millions of GitHub users contribute code and read some code here and there. And I know a few tech blogs that like to check apps and see if they respect privacy and so on. ... And that's not everything, as we pointed out earlier. Whether this helps you depends on your own goals and threat model.
I really enjoy the discussion here. Refreshing! Most of the time I, as a relative non-expert, have no idea what I'm doing, but I do read as much as I can. Otherwise I'm a fallen sysadmin who got a job managing cyber because bills need to be paid.
Open, closed, it's all object code in the end, which can be examined in disassembly, or its behaviour observed at runtime. Open makes some processes in this area easier. I think the real strengths here have been beyond security: enhancing cooperation and reuse so we don't waste time constantly reinventing.
Have you ever had a look at source code and at disassembly? The first is like reading a book where somebody gives the computer instructions. It's kinda readable (if you've learned it), and you can figure out with 'little' effort what it's supposed to do and what it's actually doing. Disassembly is like opening the maintenance door of a strange machine: you just see millions of moving cogs and wheels. Sure, you can figure out what a single cog is for, or how one part of the machine works. But you'd have to trace thousands of movements by hand, sometimes while it's running. And it takes days, sometimes weeks, to do that, even with the help of quite sophisticated tools.
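You can see the contrast even at toy scale with Python's standard-library `dis` module (the exact opcode names vary by interpreter version, but the point stands):

```python
import dis

def area(width, height):
    # Source level: the intent is immediately readable.
    return width * height

# Machine level: the same one-liner becomes a stream of low-level
# instructions (load operand, load operand, multiply, return...).
dis.dis(area)

# Collect just the opcode names to see how many steps one line becomes.
ops = [instruction.opname for instruction in dis.Bytecode(area)]
print(ops)
```

Even a single readable line expands into several opcodes, and real disassembly of native machine code is far less friendly than this, with no variable names or structure to lean on.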
You're right, there is a difference in effort. That said, source code can also be obscured if you're trying to hide something. Behavioural analysis at runtime is effective either way, but it typically doesn't tell you anything about code coverage.
Sure. You can try to sneak in something that isn't obvious. But you can also try to evade behavioural analysis: don't load your malicious code if you detect you're running inside a virtual machine, stop sending packets if sniffer software is installed, only send data every two months, etc. It's an arms race either way.
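That first trick can be sketched in a few lines. This is an illustrative Python heuristic only (the helper name is made up; the MAC prefixes are real virtualization-vendor OUIs), showing why analysis sandboxes often never see the malicious path execute:

```python
# Sketch of a sandbox-evasion check: many analysis VMs expose network
# interfaces whose MAC prefix (OUI) belongs to a virtualization vendor.
VM_MAC_PREFIXES = {
    "00:0c:29", "00:50:56",  # VMware
    "08:00:27",              # VirtualBox
}

def looks_like_vm(mac_address: str) -> bool:
    """Crude heuristic: is this NIC's MAC prefix a known VM vendor OUI?"""
    return mac_address.lower()[:8] in VM_MAC_PREFIXES

# Malware built this way stays dormant inside the analyst's VM and only
# activates on what looks like real hardware.
print(looks_like_vm("08:00:27:12:34:56"))  # VirtualBox-style MAC
print(looks_like_vm("d4:3b:04:12:34:56"))  # arbitrary non-VM MAC
```

Real evasion checks go much further (CPUID flags, timing, driver names), which is why defenders have to combine behavioural analysis with other approaches.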
Regarding 'a difference in effort': Idk, it's a pretty big difference. You could also call flying to Hawaii for two weeks versus swimming there 'a difference in effort'. And while there might be one or two outliers with obscure code, the majority will be kind of readable. But I agree: to be effective, you have to be intelligent, pay close attention in case somebody tries to sneak something in in plain sight, know how you could be tricked, and use multiple tools and approaches simultaneously.
For the human-hours of work that are put into it, it's very expensive. I've contributed translations, highlighted bugs, and put up a Jerboa fork to help mitigate issues with the 0.18 Lemmy upgrade... If I were to do this kind of thing for work, I'd bill CAD 25 per hour at the very minimum.