this post was submitted on 22 Mar 2024
48 points (96.2% liked)

Selfhosted


I have many services running on my server and about half of them use postgres. As long as I installed them manually, I would always create a new database and reuse the same postgres instance for each service, which seems quite logical to me. The least amount of overhead, fast boot, etc.

But since I started to use docker, most of the docker-compose files come with their own instance of postgres. Until now I just let them do it and was running a couple of instances of postgres. But it's getting kind of ridiculous how many postgres instances I run on one server.

Do you guys run several dockerized instances of postgres or do you rewrite the docker compose files to give access to your one central postgres instance? And are there usually any problems with that like version incompatibilities, etc.?
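To make the rewrite option concrete, here is a hedged sketch of what pointing a dockerized service at one shared Postgres could look like; the service name, database name, and credentials are made up for illustration:

```yaml
# Hypothetical docker-compose.yml sharing one Postgres instance.
# Names, versions, and credentials are illustrative only.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - ./postgres/data:/var/lib/postgresql/data

  someapp:
    image: someapp:latest
    environment:
      # Instead of a bundled "db" container, point the app at the
      # shared instance and a database dedicated to this app.
      DATABASE_URL: postgres://someapp:changeme@postgres:5432/someapp_db
    depends_on:
      - postgres
```

Each service would then get its own role and database inside the single instance, e.g. `CREATE DATABASE someapp_db OWNER someapp;`.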

all 44 comments
[–] ad_on_is@lemmy.world 20 points 8 months ago

I use the provided databases in the docker-compose file, since some services require a specific version and I'm too lazy to investigate whether they would work with my existing instance or not.

[–] andrewalker@feddit.nl 19 points 8 months ago (1 children)

I have a single big Postgres instance, shared among immich, paperless, lldap, grafana, and others. I only use the provided docker-compose as inspiration and do my own thing. It's nicer to back up a single database (plus additional volumes, but still).
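As a sketch of the "back up a single database server" point, assuming the shared instance runs in a container named `postgres` (the container name, user, and backup paths are assumptions):

```shell
# Dump every database in the shared instance in one pass.
docker exec postgres pg_dumpall -U postgres > /backups/all-databases.sql

# Or dump one application's database individually:
docker exec postgres pg_dump -U postgres immich > /backups/immich.sql
```

`pg_dumpall` covers roles and all databases at once, which is what makes the single-instance backup story simpler than snapshotting a dozen separate containers.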

[–] atzanteol@sh.itjust.works 13 points 8 months ago (1 children)

I only use the provided docker-compose as inspiration and do my own thing

This is the correct way to look at it. Most applications that provide a docker compose do so as a convenience to get started quickly. It's not necessarily what you should run.

[–] seang96@spgrn.com 5 points 8 months ago* (last edited 8 months ago) (1 children)

It is recommended to run a separate postgres for each service though, since they may have completely different needs/configurations for their queries to be optimal. For self-hosting, Lemmy and Matrix would be the big concerns here.

[–] atzanteol@sh.itjust.works 2 points 8 months ago* (last edited 8 months ago) (1 children)

It is recommended to run postgres for each service

Absolute sentences like this are rarely true. Sometimes it does make sense and sometimes it doesn't. One database is often quite capable of supporting the needs of many applications. And sometimes you need to fine-tune things for a specific application.

[–] seang96@spgrn.com 2 points 8 months ago (1 children)

Say what you want, it's a recommendation and it's documented in quite a few deployment methods. The only benefit of centralizing is if you're managing Postgres without other tools, since multiple instances would be a pain in the butt otherwise. You'll still run into apps that don't run on later versions and others that require later versions, though.

An example of a very popular one:

How many databases should be hosted in a single PostgreSQL instance?

Our recommendation is to dedicate a single PostgreSQL cluster (intended as primary and multiple standby servers) to a single database, entirely managed by a single microservice application. However, by leveraging the "postgres" superuser, it is possible to create as many users and databases as desired (subject to the available resources).

The reason for this recommendation lies in the Cloud Native concept, based on microservices. In a pure microservice architecture, the microservice itself should own the data it manages exclusively. These could be flat files, queues, key-value stores, or, in our case, a PostgreSQL relational database containing both structured and unstructured data. The general idea is that only the microservice can access the database, including schema management and migrations.

CloudNativePG has been designed to work this way out of the box, by default creating an application user and an application database owned by the aforementioned application user.

Reserving a PostgreSQL instance to a single microservice-owned database enhances:

  - resource management: in PostgreSQL, CPU and memory constrained resources are generally handled at the instance level, not the database level, making it easier to integrate it with Kubernetes resource management policies at the pod level
  - physical continuous backup and Point-In-Time Recovery (PITR): given that PostgreSQL handles continuous backup and recovery at the instance level, having one database per instance simplifies PITR operations, differentiates retention policy management, and increases data protection of backups
  - application updates: enable each application to decide their update policies without impacting other databases owned by different applications
  - database updates: each application can decide which PostgreSQL version to use, and independently, when to upgrade to a different major version of PostgreSQL and at what conditions (e.g., cutover time)
[–] atzanteol@sh.itjust.works 1 points 8 months ago (1 children)

You're talking about a microservices architecture running in a kubernetes cluster? FFS.... 🙄

That's a ridiculous recommendation for a home-gamer. It's all up to how you want to manage dependencies, backups, performance, etc. If one is happy to have a single instance then there's nothing wrong with that. If one wants multiple instances for other reasons that's fine too. There are pros and cons to each approach. Your "I saw somebody recommend it on the internets" notwithstanding.

[–] seang96@spgrn.com 2 points 8 months ago (1 children)

It's the one I'm using, but it's not just for running in a cluster. Some applications, like Matrix, even recommend running Postgres separately. You can't run everything on the same version all the time anyways.

[–] atzanteol@sh.itjust.works 1 points 8 months ago

You can't run everything on the same version all the time anyways.

Unless you're doing something very specific with the database - yes you can. Most applications are fine with pretty generic SQL. For those that have specific requirements, well then give them their own instance. Or use that version for the ones that don't much care...

[–] nik282000@lemmy.ca 19 points 8 months ago

I keep each service's DB separate; if something breaks or gets a major upgrade, I don't have to worry about other containers.

[–] matto@lemm.ee 14 points 8 months ago (1 children)

Not so long ago I had the same question myself, and I ended up setting up one Postgres instance and one MySQL instance for all services to share. In the long run, I had so many version and settings incompatibilities across services that I moved back to one DB per service, each tuned specifically for it. Also, I add a backup app to all my docker compose files that have a DB in them. This way, backups happen periodically and automatically.

[–] richmondez@lemmy.world 6 points 8 months ago (2 children)

Which db backup app do you use if you don't mind me asking?

[–] tritonium@midwest.social 3 points 8 months ago* (last edited 8 months ago)

You don't need a db backup app... bind mount the data to a location, then just stop the container and have borg take the backups. You can do this with all your containers.

```
/docker/postgres
/docker/postgres/data
/docker/postgres/compose.yml
```

And do that with every container. Easy as fuck to back up and restore them.
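The stop-then-snapshot approach above might look something like this as a sketch; the borg repository path is an assumption:

```shell
# Cold backup: stop the container so the data directory is consistent,
# snapshot it with borg, then start it back up.
docker compose -f /docker/postgres/compose.yml stop
borg create /mnt/backups/borg::postgres-{now} /docker/postgres/data
docker compose -f /docker/postgres/compose.yml start
```

Stopping first matters: copying a live Postgres data directory can produce an inconsistent backup, whereas a cold copy of the whole directory is always restorable.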

[–] sunaurus@lemm.ee 11 points 8 months ago

If I have several backends that more or less depend on each other anyway (for example: Lemmy + pict-rs), then I will create separate databases for them within a single postgres - reason being, if something bad happens to the database for one of them, then it affects the other one as well anyway, so there isn't much to gain from isolating the databases.

Conversely, for completely unrelated services, I will always set up separate postgres instances, for full isolation.

[–] ptz@dubvee.org 4 points 8 months ago

I used to, but now I just have one big one (still in Docker) that's sized and tuned to handle all of my applications.

I've only had one version compatibility issue, but that was because I was on pgSQL 13 and the updated version of one application needed 15. Upgrading that didn't affect any other applications. If it had, I would have just broken that one application out to its own stack-local Postgres.
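A major-version jump like 13 to 15 can't reuse the old data directory, so the usual route is dump-and-restore. A hedged sketch, with hypothetical container names:

```shell
# Dump everything from the old instance...
docker exec pg13 pg_dumpall -U postgres > upgrade.sql

# ...start a fresh instance on the new major version...
docker run -d --name pg15 \
  -v pg15-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=changeme postgres:15

# ...and restore into it.
docker exec -i pg15 psql -U postgres < upgrade.sql
```

With one big shared instance this is a single migration; with per-app instances you repeat it per container, but only when each app actually needs it.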

I have a separate network for postgres. Every service which needs a DB is attached to it. I use a single postgres container with several DBs and finetuned with PGTune.

The most important thing as always: proper backups ☝🏻

The only service with extra DB container is Immich, since it uses a custom variant and I am too lazy to modify the existing container 😁
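The shared-network setup described above can be sketched like this; the network and service names are illustrative:

```yaml
# Created once on the host with: docker network create postgres-net
# Then each stack that needs the DB attaches to it:
services:
  myapp:
    image: myapp:latest
    networks:
      - postgres-net

networks:
  postgres-net:
    external: true
```

Marking the network `external` tells compose not to create or destroy it with the stack, so many independent stacks can reach the one Postgres container by name.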

[–] redxef@scribe.disroot.org 2 points 8 months ago

In theory, lots of people recommend having everything in a single docker-compose file for easier transfer and separation, but I have so much running that it's grouped by purpose instead. One of those groups is data storage, so I have a single server with all the databases (as far as compatibility allows). I would like to some day have a highly available Postgres cluster with automatic failover and failback, but that needs a lot of testing, and I'm no Postgres admin, so also a lot of time to research how to do it properly.

[–] bjoern_tantau@swg-empire.de 2 points 8 months ago

Asked the same question a little while ago. See https://swg-empire.de/post/625121 for more opinions.

I ended up putting Nextcloud and Lemmy on one instance. No problems so far; resource usage and performance are great. But I haven't been running it for very long.

[–] DeltaTangoLima@reddrefuge.com 2 points 8 months ago* (last edited 8 months ago)

I run Proxmox with a few nodes, and each of my services are (usually) dockerized, each running in a Proxmox Linux container.

As I like to keep things segregated as much as possible, I really only have one shared Postgres, for the stuff I don't really care about (i.e. if it goes down, I honestly don't care about the services it takes with it, or the time it'll take me to get them back).

My main Postgres instances are below - there's probably others, but these are the ones I backup religiously, and test the backups frequently.

  1. RADIUS database: for wireless auth
  2. paperless-ngx: document management indexing & data
  3. Immich: because Immich has a very specific set of Postgres requirements
  4. Shared: 2 x Sonarr, 3 x Radarr, 1 x Lidarr, a few others
[–] MrMcGasion@lemmy.world -1 points 8 months ago (3 children)

That's a big reason I actively avoid docker on my servers: I don't like running a dozen instances of my database software. And considering how much work it would take to go through and configure each docker container to use an external database, to me it's just as easy to learn to configure each piece of software yourself and know what's going on under the hood, rather than relying on a bunch of defaults made by whoever made the docker image.

I hope a good amount of my issues with docker have been solved since I last seriously tried to use it (which was back when they were literally giving away free t-shirts to get people to try it). But the times I've peeked at it since, it seems to me that docker gets in the way more often than it solves problems.

I don't mean to yuck other people's yum though, so if you like docker, and it works for you, don't let me stop you from enjoying it. I just can't justify the overhead for myself (both at the system resource level, and personal time level of inserting an additional layer of configuration between me and my software).

[–] Lifebandit666@feddit.uk 3 points 8 months ago

I agree to a certain extent and I'm actively using Docker.

What I've done is made an Ubuntu VM, put Docker on it and booted a Portainer client container on it, then made that into a container template, so I can just give it an IP address and boot it up, then add it to Portainer in 3 clicks.

It's great for just having a go on something and seeing if I wanna pursue it.

But so far I've tried to boot and run Arr and Plex, and more recently Logitech Media Server and it's just been hard work.

I've found I'm making more VMs than I thought I would and just putting things together in them, rather than trying to run stacks of Docker together.

That said, it looks like it is awesome when you know what you're doing.

[–] summerof69@lemm.ee 3 points 7 months ago

What overhead are you talking about? You don't need a dozen instances of a database. You can create one, with or without docker, and configure any service to use it. The idea of docker and docker compose is that you can easily start up the whole env, but you don't have to.

[–] sardaukar@lemmy.world 3 points 7 months ago (3 children)

It's kinda weird to see the Docker scepticism around here. I run 40ish services on my server, all with a single docker-compose YAML file. It just works.

Comparing it to manually tweaking every project I run seems impossibly time-draining in comparison. I don't care about the underlying mechanics, just want shit to work.

[–] skittlebrau@lemmy.world 2 points 7 months ago

I care about the underlying tech in the things I deploy, but the reality is that I lack the time to actively do this in practice.

Ideally I would set everything up manually, but it’s just too hard to keep up with 30+ projects and remembering how/why I set everything up, even with documentation. Docker Compose makes my homelab hobby more manageable.

[–] MrMcGasion@lemmy.world 1 points 7 months ago (1 children)

I think my skepticism, and my desire to have docker get out of my way, have more to do with already knowing the underlying mechanics, being used to managing services before docker was a thing, and then docker coming along and saying "just learn docker instead." Which would be fine, if it didn't mean not only an entire shift away from what I already know, but a separation from it, with extra networking and docker configuration to fuss with. If I weren't already used to managing servers pre-docker, then yeah, I'd totally get it.

[–] sardaukar@lemmy.world 1 points 7 months ago (1 children)

I used to be a sysadmin in 2002/3, and let me tell you, Docker makes all that menial, boring work go away and services just work. Which is what I want, instead of messing with php.ini extensions or custom iptables rules.

[–] MrMcGasion@lemmy.world 2 points 7 months ago

Maybe I'll try giving it another go soon to see if things have improved for what I need since I last tried. I do have a couple of aging servers that will probably need upgrading soon anyway, and I'm sure the python scripts I've used in the past to help automate server migration will need updating since I last used them.

[–] Moonrise2473@feddit.it 1 points 7 months ago (1 children)

I have everything in docker too, but a single yml with 40 services is a bit extreme - you would be forced to upgrade everything together, no?

[–] sardaukar@lemmy.world 1 points 7 months ago

Not really. The docker-compose file has services in it, and they're separate from each other. If I want to update sonarr but not jellyfin (or its DB service), I can.

[–] Shimitar@feddit.it -5 points 7 months ago (1 children)

This is one of the annoying issues with docker, or better, on how docker is abused in production.

The single-instance/multiple-databases setup is the correct way to go; the docker approach messes with that.

Rewriting docker files is always a possibility, but honestly it defies the reason self-hosters use docker in the first place.

Also beware that some devs will shut you out of support if you do, especially for apps that ship docker files by default.

Go bare metal if possible; that way you have full control. Use docker for setting up stuff quickly and being flexible, at the cost of accepting how stuff is packaged by upstream.

[–] sardaukar@lemmy.world 6 points 7 months ago

The official Postgres Docker image is geared towards single database per instance for several reasons - security through isolation and the ability to run different versions easily on the same server chief among them. The performance overhead is negligible.

And about devs not supporting custom installation methods: I'm more inclined to think it's for lack of time to support every individual native setup, and they'd rather just respond to tickets about their official one (which is also why Docker exists in the first place).
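Running different major versions side by side, as the isolation argument suggests, is straightforward with the official image. A minimal sketch, with illustrative names, ports, and passwords:

```shell
# Two independent instances on different host ports.
docker run -d --name pg15 -p 5433:5432 \
  -e POSTGRES_PASSWORD=changeme postgres:15
docker run -d --name pg16 -p 5434:5432 \
  -e POSTGRES_PASSWORD=changeme postgres:16
```

Each container gets its own data directory, so an app pinned to 15 and an app wanting 16 never have to negotiate a shared upgrade window.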