this post was submitted on 02 May 2024
Cybersecurity
I said it's not a Docker problem in the same way that installing a malware-laden executable isn't an OS problem. Yes, you can install malware through Docker if you opt in; the trick is not to opt in. Check where the images are coming from, look at multiple examples online and see if they're all using the same image, etc. Do your due diligence, especially if you're not a developer and reading the Dockerfiles is therefore impractical.
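Part of that due diligence can even be made mechanical. As a hedged sketch (the helper name is made up, POSIX sh, no dependencies), you can refuse to deploy any image reference that isn't pinned to a content digest:

```shell
# Hypothetical helper: accept only image references pinned to a sha256
# digest, so a re-tagged (possibly compromised) image on the registry
# can't silently replace the one you vetted.
image_is_pinned() {
    case "$1" in
        *@sha256:*) return 0 ;;   # e.g. nginx@sha256:6c7249...
        *)          return 1 ;;   # bare tags like nginx:latest are mutable
    esac
}
```

A tag like `nginx:latest` can point at different content tomorrow; a digest reference cannot, so vetting it once actually means something.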
But unless there's an actual vulnerability in Docker (e.g. an exploit in the daemon, a credential breach of Docker Hub, etc.), the finger should be pointed elsewhere. Maybe that's the kernel (e.g. breaking out of the sandbox), but more likely it's blog posts and users that are at fault.
I looked through those links, and here are my thoughts:
Yes, there are plenty of opportunities for exploits in the Docker ecosystem, but I think the risk is pretty low given that Docker has its own sandbox, which adds a layer of complexity to exploits. The focus shouldn't necessarily be on something Docker should be doing, but on teaching users how to configure containers properly.
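For concreteness, "configuring containers properly" can look something like this Compose fragment (the service name and image are placeholders, not from the thread):

```yaml
# Hypothetical hardened service; every name here is an example
services:
  app:
    image: nginx:1.27-alpine          # pin a version, never :latest
    read_only: true                   # immutable root filesystem
    cap_drop: [ALL]                   # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true        # block setuid privilege escalation
    tmpfs:
      - /tmp                          # writable scratch only where needed
      - /var/cache/nginx
      - /run
    restart: unless-stopped
```

None of this requires non-root images; it just narrows what a compromised process inside the container can do.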
No platform is perfect, and practicing the above is probably easier than implementing non-root containers, and non-root itself isn't a cure-all. Do the above first, and then consider non-root containers in production.
Except no one is doing that. Every major distro has mechanisms for software supply chain security and reproducible builds.
You're on to something here. If you automate that process, you end up with something we call a package manager.
Exactly. And since reviewing Dockerfiles is impractical, there's no way Docker prevents you from shooting yourself in the foot. Distros learned that long ago: insecure default configs and injected dependencies are a thing of the past there. With Docker, those get reintroduced.
Given the number of posts I've seen online about adding PPAs or RPM repos, or installing things from source, I'd say that number is a lot higher than zero.
Docker contains that nonsense in a way that's easy to update. And it allows developers to test production dependencies on a system without those production libs.
Package managers don't provide a sandbox. You can get one with SELinux, but that requires proper configuration, and then we're back to the original issue.
Docker provides a decent default configuration. So I think the average user who doesn't run updates consistently, may add sketchy dependencies, and doesn't audit things would be better off with Docker.
So yeah, you may have more vulnerabilities with Docker, but they're less likely to cause widespread issues since each is in its own sandbox. And due to the way it's designed, it's often easier to do things "the right way" (separate concerns) than the wrong way (relax security until it works).
Nothing wrong with that. Unlike Docker, that's a cryptographically protected toolchain/build chain/dependency chain, so a PPA owner is much less likely to get compromised.
Installing things from source in a secure environment is about as safe as you can get, provided you obtain the source securely.
Really? Is there already a built-in way to update all installed Docker containers?
What's hard about `apt full-upgrade`?
I didn't say that.
That's false.
Also false. Sandbox evasion is very easy, and the next local privilege-escalation kernel vulnerability is only weeks away. VM evasion is also a thing.
Basically, one compromised container giving local code execution is enough to pwn your complete host.
Again, I have yet to see evidence of a Docker repo being compromised.
Repos are almost never compromised. Most issues are from users making things insecure because it's easier that way. As in:
And so on. Docker makes it really easy to throw away the entire VPS and redeploy, which means those bad configs are less likely to persist in production.
Sure, but that's only true from the moment you release.
Source-compiled packages rarely get updated, at least in my experience. However, putting source packages in a Docker container increases those chances because it's really easy to roll back if something goes wrong, and if you upgrade the base, it rebuilds the source with those new libraries, even if statically compiled.
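As a sketch of that rebuild-on-base-upgrade behavior (the paths and make targets are made up):

```dockerfile
# Hypothetical multi-stage build; bumping "bookworm" to the next release
# forces the source to be recompiled against the updated toolchain and
# libraries, even for statically linked binaries.
FROM debian:bookworm AS build
RUN apt-get update && apt-get install -y --no-install-recommends gcc make
COPY . /src
RUN make -C /src

FROM debian:bookworm-slim
COPY --from=build /src/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

And rolling back is just redeploying the previous image tag.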
If you're talking about repository packages built from source (I'm not, I'm talking about side-loading packages), that's not really a thing anymore with reproducible builds.
In our CI pipeline, yes: I just run a "rebuild all" script and it's ready for the next deploy to production. Or the script run could be totally automated (I'm currently fighting our DevOps team to enable it). We use the ":<version>-<base>" tag pattern, so worst case we update the base from "bullseye" to "bookworm" (or whatever) separately from the version and ship that. The total process is like 20s per repo (edit the Dockerfile or docker-compose.yml, make a PR, then ship), plus whatever time it takes to build and deploy.
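A minimal sketch of such a "rebuild all" script (the `services/` layout is an assumption, and `DOCKER` is overridable so the loop can be dry-run without a daemon):

```shell
#!/bin/sh
# Hypothetical CI helper: rebuild every Compose project under services/.
# Set DOCKER=echo to dry-run the loop without a Docker daemon.
set -eu
DOCKER="${DOCKER:-docker}"

rebuild_all() {
    for dir in services/*/; do
        [ -f "${dir}docker-compose.yml" ] || continue
        echo "rebuilding ${dir}"
        # --pull refreshes the base image so source gets recompiled
        # against updated libraries
        ( cd "$dir" && $DOCKER compose build --pull && $DOCKER compose up -d )
    done
}
```

CI would just call `rebuild_all` after checkout; the same loop works locally.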
But I don't necessarily want to upgrade everything. If I upgrade the OS version, I need to check that each process I use is compatible, so I'm more likely to delay doing it. With Docker, I can update part of my stack and it won't impact anything else. At work, we have dozens of containers, all at various stages of updates (process A is blocked by dependency X, B is blocked by Y, etc), so we're able to get most of them updated independently and let the others lag until we can fix whatever the issues are.
When installing to the host, I'd need to do all of it at once, which means I'm more likely to stay behind on updates.
Source? I'm coming from ~15 years of experience, and I can say that servers rarely see updates. Maybe it happens in larger firms, but not in smaller shops. But then, larger firms can also run security audits of docker images and whatnot.
Maybe. That depends on if the attacker is able to get out of the sandbox. If it was a vulnerability in a process on the host, there's no sandbox to escape.
So while your Docker containers will probably lag a bit, they come with a second layer of protection. If your host lags a bit, there's probably no second layer of protection.
*ouch*