moonpiedumplings

joined 2 years ago
[–] moonpiedumplings@programming.dev 2 points 2 days ago* (last edited 2 days ago) (1 children)

So Soatok advocates for Signal as pretty much the "gold standard" of E2EE apps, but it has a pretty big problem:

  1. Having Signal be the sole distributor of the app sort of breaks the threat model, where you trust the app to encrypt data and hide it from the server

  2. Signal is hostile to third parties packaging and distributing Signal

The combination of these problems is supposed to be fixed by reproducible builds, which let you verify that anyone who builds the code gets the same binaries and outputs. Soatok mentions reproducible builds and the problems they solve in another blog post.
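Roughly, "reproducible" means anyone can rebuild the shipped artifact themselves and diff it against the official one. A sketch of the idea for an Android app (the release tag, build command, and compared files here are illustrative, not Signal's documented procedure):

```bash
# Pull the APK that the developer actually distributes to users.
adb pull "$(adb shell pm path org.thoughtcrime.securesms | sed 's/^package://')" official.apk

# Build the same tagged release from source yourself.
git clone https://github.com/signalapp/Signal-Android.git
cd Signal-Android
git checkout vX.Y.Z        # hypothetical release tag
./gradlew assembleRelease  # illustrative; the real build is containerized

# Compare the contents (signatures will differ, so compare inner files,
# not the whole APK). Any mismatch means you can't rule out a backdoor.
diff <(unzip -p official.apk classes.dex | sha256sum) \
     <(unzip -p app/build/outputs/apk/*/release/*.apk classes.dex | sha256sum)
```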

But Signal's reproducible builds are broken.

The problem is that the answer to Soatok's second question, "Can you accidentally/maliciously turn it off?", is YES if you are using packages obtained directly from the developer, without signatures to verify their identity and without reproducible builds. They could put a backdoor in there, and you would have no way to tell. It's not fair to pretend Signal doesn't have that flaw while dissing OMEMO:

> To understand why this is true, you only need check whether OMEMO is on by default (it isn’t), or whether OMEMO can be turned off even if your client supports it (it can)

(Although there is an argument to be made that having E2EE always on by default would minimize user error in configuring it.)

Now, I still think Signal is a great software choice for many things. It's basically the best universal replacement for text messaging.

But some people need something more secure than that. If you're seriously concerned about certain entities compromising the Signal project, then you must have the ability to install clients from third-party distributors and developers, even though those can have security issues of their own, which Soatok notes in a post about Matrix (see the heading "Wasn’t libolm deprecated in May 2022?"):

> I thought the whole point of choosing Matrix over something like Signal is to be federated, and run your own third-party clients?

Yes, Soatok. Depending on your threat model, you may need to be able to choose from more than one client implementation, even if all of them are trash except for three. (Although I wouldn't recommend Matrix as a private messenger due to metadata like users/groups being public, it's shaping up to be a great Discord clone with a PM feature. Is the cryptography as secure as Signal's? No. But it checks the box of "Discord, but doesn't sell my data" (yet, of course; Matrix is VC funded).)

Anyway, it's frustrating how he seems to have become more of a hardliner about this. These standards used to be the bar to clear to become a Signal competitor. Now they are the bar to clear to be recommended at all (see the main section, "How do experts recommend secure messaging apps"), even though Signal itself doesn't clear them.

> dev can keep using bash

I don't want "devs to keep using bash". My security problems are with the developer distributions of the software itself, rather than with bash. Even if developers offered a Rust binary as an installer (or a setup.exe), I would still be miffed and disappointed with them for doing things like vendoring CVEs into their software!

Simply having this discussion brings attention to the issue, and to alternatives for getting packages onto the user's machine, thereby increasing their security. There's a reason why it's a hot topic whenever it's brought up.

[–] moonpiedumplings@programming.dev 0 points 2 days ago (9 children)

I think that distributing general software via curl | sh is pretty bad, for all the well-known reasons that make curl | sh insecure and frustrating.

But I do make an exception for "platforms" and package managers. The question I ask myself is: "Does this software enable me to install more software from a variety of programming languages?"

If the answer to that question is yes, which it is for k3s, then I think it's an acceptable exception. curl | sh is okay for bootstrapping things like Nix on non-Nix systems: you run one script, and in return you get a package manager that can install the various tools that would otherwise each ask you to install them with curl | bash.
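To make that concrete, here's roughly what the Nix bootstrap looks like: one curl | sh up front, then a real package manager for everything after (the installer command is Nix's official one; nix-env is the simplest install mechanism on a stock setup):

```bash
# One-time bootstrap of Nix on a non-Nix system:
sh <(curl -L https://nixos.org/nix/install) --daemon

# From here on, tools that would otherwise each want their own
# curl | sh installer come from the package manager instead:
nix-env -iA nixpkgs.deno
nix-env -iA nixpkgs.rustup
```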

K3s is pretty similar, because Kubernetes is a whole platform, with its own package manager (helm) and applications you can install. It's especially difficult to get the latest versions of Kubernetes on stable-release distros, which don't package it at all, so getting it from the developers is pretty much the only way to install it.
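So the flow ends up as one bootstrap script, then the platform's own tooling for everything after. The k3s line is their documented installer; the chart installed with helm afterwards is just an arbitrary example:

```bash
# Bootstrap the platform itself:
curl -sfL https://get.k3s.io | sh -

# After that, software comes in through the platform's package manager
# rather than more one-off scripts:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql
```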

Relevant discussion on another thread: https://programming.dev/post/33626778/18025432

One of the frustrations I express in the linked discussion is that it's "developers" who are making these bash install scripts. But k3s is not made just by developers; it's made by SUSE, who have their own distro, openSUSE, and use openSUSE tooling. It's "packagers" making k3s and its install script, and that's another reason I find it more acceptable.

[–] moonpiedumplings@programming.dev 2 points 2 days ago (2 children)

> don’t understand why you treat it as all or nothing problem. It’s clearly not

There are clear alternatives to using developer install scripts to install software though: package managers

> And they are not using package managers because clearly they don’t meet their needs.

Developers incorrectly believe that they need to vendor dependencies or control the way their software is installed, which distro package managers don't offer them. So they don't mention the way their software (Deno, Rust) is packaged in nixpkgs, and instead promote the install script. (Actually, Deno does mention nixpkgs, and Rust mentions apt, in their less immediately visible docs, but the first recommendation is always the install script.)

The core problem mentioned here is one of packager control vs. developer control. With an install script that downloads a (usually vendored) binary, the developer has control over the version of the software, how it is installed, and what libraries it uses. They like this for a variety of reasons, but it often comes at the detriment of user security, for the reasons I have mentioned above. Please, please read the blog post about static linking, or look into my cargo audit. Developers are not security experts and should not be installing software, even though they want to and continue to do so.
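One way to see the control difference yourself: a distro-packaged binary's libraries are visible to the system and separately patchable, while a developer-shipped binary's vendored dependencies don't show up at all. A small illustration, assuming Deno was installed to its installer's default ~/.deno path:

```bash
# Distro package: every library it uses is a separate, patchable package.
ldd /usr/bin/curl

# Developer-shipped binary: the Rust dependencies (like the ones flagged
# by my cargo audit) are compiled in, invisible to ldd, and only the
# developer can ever update them.
ldd ~/.deno/bin/deno
```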

Package maintainers, on the other hand, value the security of users more than getting a new version out. They take control over how packages are installed, often sticking with older versions to dodge newly introduced security vulnerabilities, at the cost of keeping the same set of non-security bugs. Sometimes the developers whine about this, like when the Bottles devs tried to get unofficial versions of Bottles taken down. Bottles even intentionally broke non-Flatpak builds.

But I don't care about developer control. I don't care about the newest version. I don't care about the latest features. I don't care about the non-security bugs not getting ironed out until the next stable release. Developers care about these things.

But I care only about the security of the users. And that means stable release. That means package managers. That means developers not installing software.

[–] moonpiedumplings@programming.dev 3 points 2 days ago (4 children)

> It’s just a way to make bash installers more secure.

Bash installers from the developers, and vendored/pinned dependencies in general, will never be secure, for the reasons I mentioned above. Developers are bad at security. Developers should not be installing software on people's machines.

[–] moonpiedumplings@programming.dev 4 points 2 days ago (6 children)

> I said that the tool would have to be installed by default on the main distros. It would be a single binary and a man page. I don’t think it would be very difficult to get it included.

It is very difficult. The core problem is that the distros' philosophies will cause them to avoid this tool, for various reasons. Minimalist distros, like Arch, will avoid it by default because they are minimal. Debian, on the other hand, really dislikes users installing things outside of packages, for a variety of reasons that could be their own post; the short version is that they also won't package this tool. A Gentoo developer explains some of this, and also why statically compiled (single binary) setups are disliked by distro packagers.

It's a very long post, but to paraphrase a common opinion from it: developers are often bad at actually installing software, and cannot really be trusted to manage their own installer and the dependencies of the software they create. For example, here is a pastebin of me running cargo-audit on Deno. Just in that pastebin, there are two CVEs, one rated 5.9, plus an unmaintained package. One of the CVEs has a patch available. But look at the Cargo.lock:

```toml
[[package]]
name = "hickory-proto"
version = "0.25.0-alpha.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d063c0692ee669aa6d261988aa19ca5510f1cc40e4f211024f50c888499a35d7"
```

They have "vendored" and "pinned" the package, meaning it is essentially stuck on an insecure version. Although I'm sure this particular pin will be updated shortly, what sometimes happens is that a security fix ships in a non-backwards-compatible update, and lazy developers, instead of updating their software, keep the insecure version pinned.
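For anyone who wants to reproduce this: cargo-audit just checks the Cargo.lock against the RustSec advisory database, and bumping a single pinned dependency is normally a one-liner (assuming the patched version is semver-compatible with the pin):

```bash
# Install and run the audit inside the checked-out repo:
cargo install cargo-audit
cargo audit

# Bump just the vulnerable crate without touching anything else.
# Pre-release pins like 0.25.0-alpha.4 may need the version edited
# in Cargo.toml first.
cargo update --package hickory-proto
```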

In a distro's package manager, the distro steps up to patch vulnerabilities like that one, or does security maintenance for unsupported packages. Although Debian's extremely slow movement is frustrating, they are a particularly excellent example of this: their packages stay backwards compatible for the duration of a stable release's lifecycle, so a developer packaging for Debian would have no need to pin a version, but would still get security updates for the libraries they are using for 6 years.

Deno is an extremely popular package, and thankfully it has very few issues, but I have seen much worse than this, and it's because of issues like these that I am generally opposed to developers being package maintainers; that should be left up to distro or package maintainers.

> There’s 0 security. Even tarballs are usually provided with MD5 checksum that you can verify client side. With bash there’s nothing

MD5 hashes are not enough. Modern packaging systems, like Debian's or Arch's, have developers sign the packages, to ensure it was the real developer (or at least someone on the real developer's computer...) who uploaded the package. MD5 hashes offer no such verification.
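The difference in what each step actually proves, in command form:

```bash
# Checksum: only proves the file matches what the SAME server advertises.
# An attacker who can swap the tarball can swap the .md5 file next to it.
md5sum -c program.tar.gz.md5

# Signature: proves the file was signed by a key you obtained and verified
# out-of-band, which a compromised download server cannot forge.
gpg --verify program.tar.gz.sig program.tar.gz
```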

The other step needed is reproducible builds: if multiple people build a package, they should get the same output. I can verify that an XZ tarball matches its MD5 hash, but that's meaningless when the tarball itself has a backdoor in it, added when it was generated on the maintainer's own machine. (Real story, by the way. Also, the xz backdoor didn't make it into Debian stable, thanks to Debian's slow release policy and the fact that they essentially maintain and build forks of their own packages.)

If the Rust binary is not built reproducibly, then verifying its MD5 hash is meaningless.

> that all those CD tools were specifically tailored to run as workers in a deployment pipeline

That's CI 🙃

Confusing terms, but yeah. ArgoCD and FluxCD just read from a git repo and apply it to the cluster. In my linked git repo, Flux is used to install "HelmReleases"; Argo has something similar.
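The pull-based part shows up in the CLI: you don't push anything at the cluster, you just ask Flux what it has reconciled from the repo (names here are Flux's defaults):

```bash
# What has Flux applied from the git repo?
flux get helmreleases --all-namespaces

# Force an immediate re-pull instead of waiting for the sync interval:
flux reconcile source git flux-system
```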

[–] moonpiedumplings@programming.dev 5 points 2 days ago* (last edited 2 days ago) (8 children)

> But all the website already use bash scripts.

I mentioned an alternative to what these websites do: using a package manager to install these tools instead of their bash scripts.

> It’s not a package manager based on bash.

Both of the bash scripts you mentioned as examples are being used to install software. If you have examples of bash scripts that do things other than install software, then it's worth discussing how to handle those.

However, the reason bash is so popular for use cases like configuration scripts or an Arch install script is that nothing besides wget/curl and bash is required to run them. Having to fetch an extra tool on the Arch install ISO just to run a bash install script, or on a fresh, clean install to run a script that installs your tools, somewhat defeats the point of writing the script in bash, imo.

> It’s secure way to distribute bash scripts that are already being distributed in a insecure way.

Bash is inherently insecure. I consider security to cover not just malice, but also footguns like the Steam issue mentioned above. Centralizing all the bash scripts in a "repo" doesn't fix the issues with arbitrary bash scripts.

And if you are concerned about malice, the bash scripts almost always download a binary that does further arbitrary code execution and cannot be audited. What's the difference between a bash script from the developer's website and a binary from the developer's website?
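Even the usual "safer" pattern only moves the problem one step, which is exactly the point (the URL is a placeholder):

```bash
# The pipe gives you zero chance to look at what you're about to run:
curl -sSL https://example.com/install.sh | sh

# Downloading first lets you read the script...
curl -sSLo install.sh https://example.com/install.sh
less install.sh
# ...but the script almost always just fetches an opaque binary,
# so the audit trail ends here anyway.
sh install.sh
```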

[–] moonpiedumplings@programming.dev 7 points 2 days ago (2 children)

> There is also no way to verify that the software that is being installed is not going to do anything bad. If you trust the software then why not trust the installation scripts by the same authors

Just because I trust the authors to write good software in a popular programming language doesn't mean I trust them to write shell scripts in a language known for footguns.

[–] moonpiedumplings@programming.dev 8 points 2 days ago* (last edited 2 days ago) (13 children)

The problem with a central script repository is that bash scripts are difficult to audit, both for malicious activity and for bad practices and user-facing errors.

A Steam bug in their bash script once deleted a user's home directory.

Even though the AUR is "basically" bash scripts, it's acceptable because it uses its own format that calls other scripts under the hood, and the standardized format makes packages easier to audit. Although I have heard a few stories of issues even with this, like one poorly made AUR package moving someone's /bin to /opt and breaking everything.

So in my opinion, a package manager based on raw bash scripts basically doesn't work, because of these issues. All modern packaging uses some kind of standardized format, to make packages easier to audit and develop, and to either mitigate packager error or prevent it entirely.
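For contrast, this is roughly the shape of the AUR's standardized format: still bash, but every package fills in the same fields and functions, so a reviewer knows exactly where to look. A minimal sketch using GNU hello, with the checksum elided:

```bash
# Minimal PKGBUILD sketch: declarative fields up top, two well-known
# functions below; makepkg runs these in a controlled environment.
pkgname=hello
pkgver=2.12.1
pkgrel=1
pkgdesc="The GNU Hello program"
arch=('x86_64')
url="https://www.gnu.org/software/hello/"
license=('GPL3')
source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz")
sha256sums=('...')  # checksum elided in this sketch

build() {
  cd "hello-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "hello-$pkgver"
  make DESTDIR="$pkgdir" install
}
```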

If you want to install tools on a distro that doesn't currently package them, I think Nix, Junest, or distrobox are good solutions, because they essentially give you access to the package managers of other distros. Nix in particular has the most packages of any distro, more than the AUR and Arch repos combined.
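The distrobox flow, for example, looks like this (the image and name are arbitrary):

```bash
# Get another distro's package manager inside a container,
# with your home directory shared with the host:
distrobox create --name arch --image archlinux:latest
distrobox enter arch    # now pacman (and AUR helpers) are available

# From inside the container, optionally expose an installed app
# to the host's application menu:
distrobox-export --app firefox
```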

[–] moonpiedumplings@programming.dev 2 points 2 days ago (2 children)

Garden seems similar to GitOps solutions like ArgoCD or FluxCD for deploying helm charts.

Here is an example of authentik deployed using helm and fluxcd.

 

I find this hilarious. Is this an easter egg? When shaking my mouse cursor, I can get it to take up the whole screen's height.

This is KDE Plasma 6.

 

Incus is a virtual machine platform, similar to Proxmox, but with some big upsides, like more features and being packaged in Debian and Ubuntu.

https://github.com/lxc/incus

Incus was forked from LXD after Canonical implemented a Contributor License Agreement, allowing them to distribute LXD as proprietary software.

This YouTuber, Zabbly, is the primary developer of Incus, and they livestream a lot of their work on YouTube.

 

This card game looks really good. There also seems to be a big open-source server: https://github.com/cuttle-cards/cuttle

 

Source: https://0x2121.com/7/Lost_in_Translation/

Alt Text (for searchability): a three-panel comic, drawn in a simple style. In the first, leftmost panel, one character yells at another: "@+_$^P&%!". In the second panel, they keep yelling, hands raised in exasperation: "$#*@F% $$#!". In the third panel, the character who was yelling holds their head in frustration, and the previously silent character responds: "Sorry, I don't speak Perl".

Also relevant: 93% of paint splatters are valid Perl programs

 

https://security-tracker.debian.org/tracker/CVE-2024-47176, archive

As of 2024-10-01 03:52 UTC, Trixie / Debian Testing does not have a fix for the severe cupsd security vulnerability that was recently announced, despite Debian Stable and Unstable having one.

Debian Testing is intended for testing, and not really for production usage.

https://tracker.debian.org/pkg/cups-filters, archive

So the way Debian Unstable/Testing works is that packages go into Unstable for a bit, and are then migrated into Testing (Trixie).

> Issues preventing migration: Too young, only 3 of 5 days old

Basically, security vulnerabilities are not really a priority in testing, and everything waits for a bit before it updates.

I recently saw some people recommending Trixie as a "Debian, but not as unstable as Sid, and with newer packages than Stable", which is a pretty bad idea. Trixie/Testing is not really intended for production use.

If you want newer, but still stable, packages from a single set of repositories, then I recommend (not an exhaustive list, of course):

  • openSUSE Leap (Tumbleweed works too, but Secure Boot was borked when I used it)
  • Fedora

If you are willing to mix and match sources for packages, these can safely get you newer packages on a more stable distro:

  • Flatpak
  • distrobox: run other distros in docker/podman containers and use apps through them
  • Nix
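For example, getting a current Firefox onto Debian stable via Flatpak, without touching the system libraries:

```bash
# One-time setup of the Flathub remote:
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# A newer app, sandboxed, with its own runtime, on an otherwise stable base:
flatpak install flathub org.mozilla.firefox
```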

 

cross-posted from: https://programming.dev/post/18069168

I couldn't get any of the OS images to load on any of the browsers I tested, but they loaded for other people I tested it with. I think I'm just unlucky.

Linux emulation isn't too polished.

 

According to the ArchWiki article on swap files on btrfs: https://wiki.archlinux.org/title/Btrfs#Swap_file

> Tip: Consider creating the subvolume directly below the top-level subvolume, e.g. @swap. Then, make sure the subvolume is mounted to /swap (or any other accessible location).

But... why? I've been researching for a bit now, and I still don't understand the benefit of a subvolume directly below the top-level subvolume, as opposed to a nested subvolume.

At first I thought this might be because nested subvolumes are included in snapshots, but that doesn't seem to be the case, according to a reddit post... and I can't find anything about this on the Arch wiki, the Gentoo wiki, or the btrfs readthedocs page.

Any ideas? I feel like the tip wouldn't be there for no reason.
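For reference, the mechanics are the same wherever the subvolume ends up sitting. With btrfs-progs >= 6.1 the whole dance is (size and paths illustrative):

```bash
# Create the dedicated subvolume and a properly-flagged swapfile inside it:
btrfs subvolume create /swap
btrfs filesystem mkswapfile --size 8g /swap/swapfile
swapon /swap/swapfile

# (On older btrfs-progs you'd do chattr +C, truncate, and mkswap by hand.)
```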
