Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
Good news: DNS over TCP in musl has been fixed as of v1.2.4, released in May: https://www.openwall.com/lists/musl/2023/05/02/1
So if you use Alpine >= 3.18, you should no longer have this issue.
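If you want to check whether a given system is affected, a minimal sketch of the version comparison (`has_dns_fix` is a hypothetical helper name, not part of musl or Alpine):

```shell
# Hypothetical helper: decide whether a musl version string includes the
# DNS-over-TCP fallback fix (present since musl 1.2.4).
# On a running Alpine system, read the installed version with: apk info musl
has_dns_fix() {
    # sort -V orders version strings; if 1.2.4 sorts first (or equal),
    # the given version is >= 1.2.4.
    printf '%s\n1.2.4\n' "$1" | sort -V | head -n 1 | grep -qx '1.2.4'
}

has_dns_fix "1.2.4" && echo "fixed"      # musl shipped with Alpine 3.18
has_dns_fix "1.2.3" || echo "not fixed"  # older releases lack the fix
```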
It’s always DNS lol
As pointed out, the DNS issue was fixed, and the other point made about Python wheels has also been addressed; quite a good chunk of packages on PyPI have had a musl wheel added in the past six months or so, including numpy and scipy. I'm also not certain the Go part is true; probably around half of the Go apps I'm running as containers are running on, or were built on, an Alpine base.
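One quick way to tell whether a package ships musl builds is to look for the `musllinux` platform tag (PEP 656) in its wheel filenames on PyPI. A small sketch, using illustrative filenames:

```shell
# Wheel filenames encode the target platform tag. Wheels built for musl-based
# distros like Alpine carry a "musllinux" tag (PEP 656); glibc builds carry
# "manylinux". is_musl_wheel is a hypothetical helper, not a real tool.
is_musl_wheel() {
    case "$1" in
        *musllinux*) echo "yes" ;;
        *)           echo "no"  ;;
    esac
}

# Filenames in the style of what PyPI lists for numpy:
is_musl_wheel "numpy-1.25.2-cp311-cp311-musllinux_1_1_x86_64.whl"   # yes
is_musl_wheel "numpy-1.25.2-cp311-cp311-manylinux_2_17_x86_64.whl"  # no
```

In practice, running `pip download <pkg> --only-binary=:all:` inside an Alpine container should surface the same answer: it fails if no compatible binary wheel exists.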
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| DNS | Domain Name Service/System |
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| LXC | Linux Containers |
| NAS | Network-Attached Storage |
| RPi | Raspberry Pi brand of SBC |
| SBC | Single-Board Computer |
| TCP | Transmission Control Protocol, most often over IP |
| VPS | Virtual Private Server (opposed to shared hosting) |
| nginx | Popular HTTP server |
[Thread #67 for this sub, first seen 19th Aug 2023, 15:26]
best bot
Even Debian LXCs run under 100 MB of RAM! I love LXC; it just feels good to use them.
I've played with both Alpine and Debian in LXC, launched multiple services in both at the same time, and honestly did not notice any advantage in RAM or CPU consumption. Debian LXC uses slightly more disk, but that's trivial for me.
I used to use Alpine containers, but I've since standardized on Debian completely. Proxmox is Debian, my VMs run Debian, my LXCs run Debian, my VPSs run Debian, Raspbian on my RPi is Debian, Armbian on my Odroid is Debian, etc., etc.
The benefit of running the same distribution on all my servers no matter where or how they’re hosted can’t be overstated.
Less mental overhead remembering different commands or config paths, same software on everything, etc. It’s been fantastic and Debian has always been rock solid for me.
Alpine packages services like Gitea and Nextcloud, which Debian does not. This makes keeping up to date a lot simpler for me, but that's personal preference.
I've found the idea of LXC containers to be better than they are in practice. I've migrated all of my servers to Proxmox and have been trying to move various services from VMs to LXC containers, and it's been such a hassle. You should be able to directly forward disk block devices, but I just could not get them to mount for a MinIO array, so I ended up chowning their entire contents to 100000:100000, mounting them on the host, and forwarding the mount point instead. Never managed to get CAP_IPC_LOCK to work correctly for a HashiCorp Vault install. Docker in LXC has some serious pain points and feels very fragile.
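For context on the 100000:100000 workaround: unprivileged LXC containers map container uid/gid 0 to host uid/gid 100000 by default, so files owned by 100000:100000 on the host appear as root-owned inside the container. A bind mount in the Proxmox container config might then look like this (VMID and paths are illustrative, not from the original post):

```
# /etc/pve/lxc/101.conf  (illustrative VMID and paths)
# Host directories, already chowned to 100000:100000, bind-mounted into the CT:
mp0: /mnt/minio/disk0,mp=/data/disk0
mp1: /mnt/minio/disk1,mp=/data/disk1
```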
It's damning that every time I have a problem with LXC the first search result will be a Proxmox forum topic with a Proxmox employee replying to the effect of "we recommend VMs over LXC for this use case" - Proxmox doesn't seem to recommend LXC for anything. Proxmox + LXC is definitely better than CentOS + Podman, but my heart longs for the sheer competence of FreeBSD Jails.
I've had relatively good luck with Docker in LXC containers, but eventually decided to run Docker in VMs, as I only semi-trust most Docker apps and like the added security I get from having them fully isolated in a VM. Some of the workarounds for Docker in LXCs are far from security best practices.
I'm assuming you installed it directly to the container vs running docker in there?
I have been debating making the jump from Docker in a VM to a container, but I've been maintaining Nextcloud in Docker the entire time I've been using it and haven't had any issues. The interface can be a little slow at times, but I'm usually not in there for long. I'm not sure it's worth having to essentially rearchitect my setup for that.
All that aside, I also map an NFS share into my Docker container that stores all my files on my NAS. This could be what causes the interface slowness I sometimes see, but the last time I looked into it there wasn't a non-hacky way to mount a share into an LXC container. Has that changed?
Yes, Alpine maintains Nextcloud in their repos. I mount my NFS share on the Proxmox host (you can mount it using the GUI and set it to any form of storage you want), then bind-mount the share folder into the LXC. I moved from Docker in a VM to this LXC with no disruption to my data.
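On disk, that setup might look like the following (storage name, VMID, and paths are illustrative):

```
# NFS storage added via the Proxmox GUI (Datacenter -> Storage -> Add -> NFS)
# is mounted on the host under /mnt/pve/<storage-id>, e.g. /mnt/pve/nas.
# Bind-mount a folder from it into the container in /etc/pve/lxc/101.conf:
mp0: /mnt/pve/nas/nextcloud-data,mp=/mnt/data
```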
Thanks, that's awesome news to see! I'm currently in the process of tearing out pieces from a monolithic docker stack into more lightweight (and independent) CTs, and have been apprehensive about moving NextCloud.
CTs?
Container, in proxmox speak
24 MiB is too little. Not even enough for nginx/apache. What installation instructions did you follow?