this post was submitted on 17 Mar 2024
54 points (82.1% liked)

Selfhosted


I see Docker mentioned in every other thread and was wondering how useful it is for non-development things, and if so, what those are.

top 42 comments
[–] poVoq@slrpnk.net 129 points 8 months ago
[–] technohacker@programming.dev 47 points 8 months ago

Containers (the concept that Docker implements) let app developers ship a self-contained environment for their application. For devs that means consistent deployments across environments, which in turn means sysadmins can deploy each of these apps as a fully isolated unit.

With that, you get really clean installs/updates/uninstalls, and your deployments are driven by a well-defined, declarative definition file that can also handle multi-service dependencies (a la Docker Compose/K8s).
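
For illustration, a minimal Compose file for a two-service stack might look roughly like this (image names, ports and volumes here are just placeholders):

```yaml
# docker-compose.yml -- one app plus the database it depends on
services:
  app:
    image: nextcloud:28          # example image/tag
    ports:
      - "8080:80"                # host:container
    depends_on:
      - db                       # bring the database up first
    volumes:
      - app-data:/var/www/html   # named volume for the app's data
  db:
    image: mariadb:11
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder secret
    volumes:
      - db-data:/var/lib/mysql

volumes:
  app-data:
  db-data:
```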

[–] JVT038@feddit.nl 23 points 8 months ago (2 children)

Docker is a container manager, but that doesn't say anything if you don't know what containers are.

Containers are basically isolated apps. For example, take something like Nextcloud. Nextcloud can run in a Docker container, which means that it runs in an isolated environment completely separated from the user's system. If Nextcloud breaks, the user's server won't be affected at all, because it's running isolated.

Why is this useful? Because you no longer have to manage the app's dependencies yourself. Nextcloud, for example, depends on PHP, and if you install Nextcloud directly on your server you'll need to ensure that PHP 8 is installed and set up properly. If PHP (or the required PHP extensions) isn't properly installed, Nextcloud won't work. And if a future Nextcloud update requires a newer version of PHP (PHP 9 or 10), you'll have to update PHP manually.

All that dependency management is completely gone with containers. The container image ships with a proper environment for the app it runs, so in the case of Nextcloud, the PHP binaries, extensions, and all the other stuff are included automatically, without the user having to do anything at all. Just run one command and your entire Nextcloud instance is updated.
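
In practice, with a Compose setup, "one command" is usually something like this (run from the directory holding your docker-compose.yml):

```bash
docker compose pull    # fetch the newer images (app plus its bundled PHP, extensions, etc.)
docker compose up -d   # recreate the containers from the new images
```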

[–] tal 3 points 8 months ago (1 children)

Also, if server software running in a container gets compromised, hopefully the container can contain the compromise from spreading to the rest of the system.

[–] JVT038@feddit.nl 1 points 8 months ago (1 children)

Depends.

If there are no external volumes and the container is in its own network without any other containers, then any malware in the container shouldn't be able to reach / affect the host server, because it's isolated.

[–] evranch@lemmy.ca 1 points 8 months ago

Even with external volumes, I don't think there should be any mechanism where a container can escape a bind mount to affect the rest of the host fs? I use bind mounts all the time, far more than docker volumes.
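
For what it's worth, a bind mount only exposes the directories you explicitly name, and you can mark them read-only on top of that; something like (paths and image are examples):

```bash
# Only the two named host directories are visible inside the container;
# the rest of the host filesystem stays out of reach.
docker run -d \
  -v /srv/nextcloud/data:/var/www/html/data \
  -v /srv/nextcloud/config:/var/www/html/config:ro \
  nextcloud:28
```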

[–] clay_pidgin@sh.itjust.works 1 points 8 months ago (2 children)

How does the container know what's safe to update? Nextcloud (in this example) may need to stay on a specific version of some package and updating everything would break it.

[–] atzanteol@sh.itjust.works 7 points 8 months ago

The Dockerfile used to build the container controls what is in the container. It's "infrastructure as code"-like. You create a script that builds the environment the application needs.

If you need a newer version of PHP you update the Dockerfile to include the new version. Then you publish the new container.
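
As a rough sketch (base image, extensions and paths are only illustrative), such a Dockerfile could look like:

```dockerfile
# Pin the PHP version the app needs
FROM php:8.3-apache

# Install the PHP extensions the app depends on
RUN docker-php-ext-install pdo_mysql opcache

# Copy the application code into the image
COPY ./app /var/www/html
```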

[–] brewery@lemmy.world 3 points 8 months ago

I only use Docker images supplied by the devs themselves or community-maintained ones (e.g. linuxserver.io), so they essentially tell Docker what needs to be installed in the container, not me. It takes the hassle out of figuring out what I need to do to get the service running. If they update their app, they'll probably know best what else needs to be updated and will do that in the image. I guess you are relying on them to keep everything updated, but they are way more knowledgeable than me, and if there is a vulnerability, it is only in that container and not your other services.

[–] frozen@lemmy.frozeninferno.xyz 20 points 8 months ago* (last edited 8 months ago) (1 children)

I could go in-depth, but really, the best way I can describe my docker usage is as a simple and agnostic service manager. Let me explain.

Docker is a container system. A container is essentially an operating system installation in a box. It's not really a full installation, but it's close enough that understanding it like that is fine.

So what the service devs do is build a container (operating system image) with their service and all the required dependencies - and essentially nothing else (in order to keep the image as small as possible). A user can then use Docker to run this image on their system and have a running service in just a few terminal commands. It works the same across all distributions. So I can install whatever distro I need on the server for whatever purpose and not have to worry that it won't run my Docker services. This also means I can test services locally on my desktop without messing with my server environment. If it works on my local Docker, it will work on my server Docker.
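
Those "few terminal commands" are typically no more than (image name and port are just examples):

```bash
docker pull nextcloud:28                                  # grab the image
docker run -d --name nextcloud -p 8080:80 nextcloud:28    # run it, exposing the web UI on port 8080
```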

There are a lot of other uses for it, like isolated development environments and testing applications using other Linux distro libraries, to name a couple, but again, I personally mostly just use it as a simple service manager.

tldr + eli5 - App devs said "works on my machine", so Docker lets them ship their machine.

[–] princessnorah@lemmy.blahaj.zone 6 points 8 months ago* (last edited 8 months ago)

> So I can install whatever distro I need on the server for whatever purpose and not have to worry that it won't run my Docker services.

The one caveat to that is switching between something ARM-based like a Pi and an x86 server. Many popular services have ARM versions but not all do.

Edit: In saying that, building your own image from source isn’t too complicated most of the time.
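
For example, with buildx you can usually cross-build an ARM image from an x86 machine (image name and tag are placeholders, and this assumes buildx plus QEMU binfmt support is set up):

```bash
docker buildx build --platform linux/arm64 -t myuser/myservice:arm64 .
```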

[–] GravitySpoiled@lemmy.ml 17 points 8 months ago (1 children)

It's useful for every service you want to host (on a server).

[–] Awe@lemmy.ml 18 points 8 months ago

It's so useful you see it mentioned on every other thread

[–] StrawberryPigtails@lemmy.sdf.org 13 points 8 months ago

For me the advantage of Docker is that a random update to my system is unlikely to crash my self hosted services. It simplifies setting up the services as well but the biggest advantage is that it is generally more stable.

[–] umbrella@lemmy.ml 12 points 8 months ago* (last edited 8 months ago)

it's a container system that saves you from dealing with interactions between server software and config files scattered everywhere, and it's more secure and more portable on top of that.

it helps you use one server for many services without issues, and lets you redeploy a given service whenever needed.

it's a bit counterintuitive to learn, but it makes running a server plain easier and almost maintenance-free if you set things up right.

[–] JoeCoT@fedia.io 9 points 8 months ago (1 children)

So it's always going to be used for technical things, but not necessarily development things. I use it for both.

For my home server setup I have docker setup like this:

  1. A VPN docker container
  2. A transmission (bittorrent client) container, using the VPN's network
  3. An nginx (web server) container, which provides access to the transmission container
  4. A 3proxy socks proxy container, using the VPN's network
  5. A tor client container
  6. A 3proxy socks proxy container, using the tor container's network

Usually it's pretty hard to say "these specific programs and only these should run over my VPN". Docker makes that easy. I can just attach containers to the same network as my VPN container, and their traffic will all go over the VPN. And then with my socks proxies I can selectively put my browser traffic over either the VPN or Tor, using extensions like FoxyProxy. I watch wrestling through my vpn because it's cheaper overseas and has better streaming options, so I have those specific sites set to route through my VPN socks proxy. And I have all onion links set to go through my Tor proxy.
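
The rough shape of that in a Compose file looks something like this (gluetun is just one popular VPN client container; names and ports are placeholders, and a real setup needs provider credentials via environment variables):

```yaml
services:
  vpn:
    image: qmcgaw/gluetun            # example VPN client container
    cap_add:
      - NET_ADMIN
    ports:
      - "9091:9091"                  # transmission's web UI, published via the VPN container

  transmission:
    image: linuxserver/transmission
    network_mode: "service:vpn"      # all of transmission's traffic goes out through the vpn container
    depends_on:
      - vpn
```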

[–] Amongussussyballs100@sh.itjust.works 3 points 8 months ago (2 children)

This looks like an interesting project. Can the VPN container only route traffic from other containers, or can regular applications have their traffic routed through the VPN container too?

[–] JoeCoT@fedia.io 3 points 8 months ago

I don't know of a good way to route other applications' traffic through the VPN container while everything is in Docker containers, unless you use some intermediary setup. That's why I have socks proxies routed through the VPN, so I can selectively put traffic through it. If the app supports a socks proxy you could do it that way. At the least you could use Proxychains to do so if the program does TCP networking.
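
If the program has no proxy setting of its own, proxychains is roughly: point its config at the SOCKS port your proxy container publishes, then prefix the command. Something like this (the port is whatever you exposed; the exact binary/config name varies by distro, proxychains vs proxychains4):

```bash
# In proxychains' config, the [ProxyList] section would end with:
#   socks5 127.0.0.1 1080
proxychains4 curl https://example.com   # forces curl's TCP traffic through the SOCKS proxy
```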

[–] clmbmb@lemmy.dbzer0.com 1 points 8 months ago* (last edited 8 months ago)

The answer is yes in both cases.

  1. Docker has an internal networking setup. You can create a "network", and all containers in that network communicate with each other, but not with containers in other networks. So you can set up a VPN container in a network, and the other containers in that network can route their traffic through it (rough sketch below).
  2. You can configure your VPN container to expose some of the ports it uses, and then "regular applications" can make use of those ports to connect through the VPN.
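
A minimal sketch of both points with the plain docker CLI (image and container names are placeholders):

```bash
# 1. A private network: containers on it can reach each other by name,
#    while containers on other networks can't reach them at all.
docker network create vpn-net
docker run -d --name vpn --network vpn-net -p 1080:1080 my-vpn-image   # also publishes a proxy port (point 2)
docker run -d --name torrent --network vpn-net my-torrent-image        # can talk to "vpn" by name

# 2. Regular applications on the host can then use the published port,
#    e.g. as a SOCKS proxy at localhost:1080.
```
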
[–] rentar42@kbin.social 8 points 8 months ago

https://lemmy.world/post/12995686 was a recent question and most of the answers will basically be duplicates of that.

One slight addition I want to add: "Docker" is just one implementation of "OCI containers". It's the one that initially broke through in the hype, but you can just as easily use any other (Podman being a popular one), and basically all of the benefits that people ascribe to "docker" apply to those as well.

So you might (as I do) have some dislike for docker (the product) and still enjoy running containers.

[–] xlash123@sh.itjust.works 7 points 8 months ago

In simple terms, it's like a VM for an application. You set it up with the right dependencies and your application will "just work" on it, without having to deal with other applications existing alongside it.

What makes it better than a VM is that it is much faster. It interfaces with kernel features that help isolate the processes and files from the rest of the system. It is not virtualization, rather it is namespacing.

Docker also provides a bunch of tools that help with creating this environment automatically and allowing for some escaping into the host, such as binding ports and sharing data with the host's file system.
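
That "escaping into the host" is just flags on docker run, for example (ports, paths and image are illustrative):

```bash
# -p binds a host port to a container port; -v shares a host directory with the container
docker run -d -p 8080:80 -v /srv/web/conf:/etc/nginx/conf.d nginx:stable
```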

Once this environment is created, it can be shared with users as a single downloadable bundle, called an image. This makes it really easy to download and run an application without having to prepare your system with the right dependencies and files.

Nothing is free though, and the cost here is more disk space and some performance overhead, although it is close to native speed.

[–] sabreW4K3@lazysoci.al 6 points 8 months ago

The thing with self-hosting is that in most cases you want to set and forget, and that means you want as little going wrong as possible. To ensure that, you need a way to keep other things from being able to fuck with what you're hosting. That's what a container gives you. The trade-off is disk space, but that's okay because it's a server, unlike on a desktop computer (but let me not start my rant about the stupidity of Snap and Flatpak). Anyway... thanks to containers you don't have any external factors, and basically everything runs in its own world. Which means you can always delete, restore and edit without anything else being affected.

[–] lemmylem@lemm.ee 4 points 8 months ago (1 children)

Wondering too: since Docker has a rootless mode, is there a reason to use Podman?

[–] domi@lemmy.secnd.me 3 points 8 months ago

They have a different architecture so it comes down to preference.

Docker runs a daemon that you talk to to deploy your services. Podman does not have a daemon; you either use the podman command directly to deploy services, or use systemd to integrate them into your system.
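
Roughly, the Podman route looks like this (image and names are examples; newer Podman also has Quadlet for the systemd part):

```bash
# Run a container; same CLI shape as docker
podman run -d --name nextcloud -p 8080:80 docker.io/library/nextcloud:28

# Generate a systemd user unit for it and enable it -- no background daemon involved
podman generate systemd --name nextcloud --files --new
mv container-nextcloud.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-nextcloud.service
```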

[–] Decronym@lemmy.decronym.xyz 4 points 8 months ago* (last edited 8 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| TCP | Transmission Control Protocol, most often over IP |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
| k8s | Kubernetes container management package |
| nginx | Popular HTTP server |

5 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

[Thread #610 for this sub, first seen 17th Mar 2024, 20:35] [FAQ] [Full list] [Contact] [Source code]

[–] multicolorKnight@lemmy.world 3 points 8 months ago

Two things: one you care about and one you might not.

The one you care about: you can set up a service in isolation, test it, make sure it works, and switch over to it once you are sure, with almost no downtime. This is important for things you actually need to use; once you do something like breaking your primary email server, you will understand. Also, less important, you can set up a service on, say, a VM at home and move it to a VPS without having to transfer the entire image, and it will work the same.

The one you might not care about: that last bit about moving servers around is important for cloud providers, who turn these things on and off all the time.

[–] CbtB@lemmynsfw.com 2 points 8 months ago

In the context of self-hosting, it means easier, cleaner installs, and it keeps poorly packaged projects from interfering with each other.

[–] CTDummy@lemm.ee 2 points 8 months ago

Docker is great because you can install something and all the shit it needs is installed and runs in that container. It’s good for a multitude of reasons mine are:

  1. No more installing a dependency, tool or library alongside a program that fucks up something else. No more shit breaking because you installed the latest Python but some other program breaks if you move beyond 3.10 (and you forgot to use venv, I guess).
  2. Somewhat a follow-on from 1, but this makes for great functionality with self-hosting. I can run a couple of docker compose/build commands and build/rebuild the containers anywhere I need them (rough sketch below). I can test a container on a Windows computer to see if it does what I want and works as intended, and then spin the same container up on my media server, even if it's a different OS. I have a bunch of them on my home server and it's great being able to just plug in the port numbers of the other containers they need to talk to, if any, and that's all. One container breaking doesn't break everything else.
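
The "couple of commands" I mean are basically just (same on any machine that runs Docker):

```bash
docker compose build    # build the images defined in docker-compose.yml
docker compose up -d    # start (or recreate) the containers in the background
```
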
[–] 0p3r470r@lemm.ee 2 points 8 months ago

My company deploys a lot of cell modems. Some of them support containers. It’s really nice to deploy everything we need in one piece of equipment, as opposed to 2 or more, for a very simple application.

Several other pieces of network equipment support it now as well. A SIEM can run a remote node directly on a switch.

[–] hayalci@fstab.sh 1 points 8 months ago

Check out this previous comment

https://lemmy.ml/comment/9168742

[–] CyberPingU@lemmy.cyberveins.eu 1 points 8 months ago* (last edited 8 months ago)

I don't get the question... Docker is awesome for development, but for putting things in production too. It just saves you the hassle of configuring a virtual machine / server from scratch, since you can use prebuilt minimal images of the software you need. If you get in trouble, you can restore things more easily than on a whole compromised system. An update, in the vast majority of cases, consists of changing a tag inside a docker-compose.yaml file. You get resource optimisation compared to virtual machines, and so on. I don't use Docker to develop at all, I use it for production. And when you don't need a service anymore, you can just delete it and the system stays clean without orphan files.
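
For example, that tag change is often the entire update, followed by two commands (image and tag purely illustrative):

```yaml
services:
  app:
    image: nextcloud:28   # bump this tag (e.g. to nextcloud:29),
                          # then: docker compose pull && docker compose up -d
```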

[–] jws_shadotak@sh.itjust.works 1 points 8 months ago

Aside from the technical explanation that others have given, here's how I use Docker:

MeTube to rip videos and stuff easily. Just plug in a link and most times it'll work. Here's a list of all the supported sites.

I use Sonarr/Radarr and qBittorrent with gluetun to search for and download TV and movies that I watch on Plex.

I host my own Immich server that will automatically back up my photos from my phone just like Google Photos, except I own it all and it's all kept private. It has its own machine learning and facial recognition, so I can search for "dog" and get all the pictures of my dogs, or I can search by person.

I use Docker for all this because the images come in little prepackaged containers. It's super easy to get into once you figure out some of the basics.

Another great benefit of these containers is that you can transfer it to another system if needed. Just copy the config and data over to the new system and point the container in the right direction and it'll pick up where it left off.
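
If the container's config and data live in bind mounts next to the compose file, the move can be roughly this (paths and hostname are placeholders):

```bash
# Copy the compose file and the bind-mounted data to the new machine
rsync -a /srv/myservice/ newhost:/srv/myservice/

# On the new machine, start the stack; it picks up the copied state
cd /srv/myservice && docker compose up -d
```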

[–] Gooey0210@sh.itjust.works -2 points 8 months ago
  1. When you're prohibited from using NixOS
  2. When there's no package for it in NixOS, and you're too lazy to package it yourself