Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Not exactly about self-hosting itself, but maintaining it and backing it up is hard for me. So many “what if”s come to mind. Like: what if the DB gets corrupted? What if the device breaks? If it's on a cloud provider, what if they decide to remove the server?

To self-host things confidently I'd need a local server and a remote one kept in sync, and setting that up is a hassle I don't want to take on.

So my question is: how safe is your setup? Are you still enthusiastic about it?

[–] Saik0Shinigami@lemmy.saik0.com 25 points 4 months ago (5 children)

Absurdly safe.

Proxmox cluster, HA active. Ceph for live data. TrueNAS for long-term/slow data.
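
Roughly speaking, "is the stack healthy" boils down to two commands. An illustrative Python sketch (not my actual tooling) that just shells out to the standard `pvecm status` and `ceph -s` CLIs on a cluster node:

```python
"""Quick health check for a Proxmox VE + Ceph node (illustrative sketch only)."""
import subprocess

def run(cmd):
    """Run a command and return (exit code, combined stdout/stderr)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr

# Both CLIs ship with Proxmox VE / Ceph; run this on a cluster node as root.
checks = [("proxmox cluster", ["pvecm", "status"]),
          ("ceph", ["ceph", "-s"])]

for name, cmd in checks:
    code, out = run(cmd)
    status = "OK" if code == 0 else f"FAILED (exit {code})"
    print(f"[{name}] {status}")
    if code != 0:
        print(out)
```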

About 600 pounds of batteries at the bottom of the rack to weather short power outages (up to 5 hours). 2 dedicated breakers on different phases of power.
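
Back-of-the-envelope on the battery claim (only the 600 lb and 5 hour figures are real; the energy density, usable fraction and rack load below are assumptions):

```python
# Rough sanity check: does 600 lb of batteries plausibly give ~5 hours?
# Assumed: ~35 Wh/kg pack-level energy density, ~80% usable after inverter
# losses / depth-of-discharge limits, ~1.5 kW average rack draw.
battery_mass_kg = 600 * 0.4536      # 600 lb -> kg
energy_wh = battery_mass_kg * 35    # assumed energy density
usable_wh = energy_wh * 0.8         # assumed usable fraction
rack_load_w = 1500                  # assumed average draw

print(f"usable energy ≈ {usable_wh / 1000:.1f} kWh")
print(f"runtime at {rack_load_w} W ≈ {usable_wh / rack_load_w:.1f} h")
```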

Dual/stacked switches with LACP'd connections that span both switches (one switch dies? Who cares). Dual firewalls in a CARP ACTIVE/ACTIVE setup...
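
If you want to watch a bond like that from software, the Linux kernel exposes per-slave state under /proc/net/bonding/. A minimal sketch (the bond name "bond0" is an assumption; adjust for your host):

```python
# Minimal LACP bond health check on a Linux host.
# Assumes an 802.3ad bond named "bond0"; adjust the name for your setup.
from pathlib import Path

bond = Path("/proc/net/bonding/bond0")
if not bond.exists():
    raise SystemExit("bond0 not found - is the bonding driver loaded?")

current = None
slaves = []
for line in bond.read_text().splitlines():
    if line.startswith("Slave Interface:"):
        current = line.split(":", 1)[1].strip()
    elif line.startswith("MII Status:") and current:
        slaves.append((current, line.split(":", 1)[1].strip()))
        current = None

for iface, status in slaves:
    print(f"{iface}: {status}")   # expect "up" on the uplink to each switch
```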

Basically everything is as redundant as it can be, aside from one power source into the house... and one internet connection into the house. My "single points of failure" are all outside of my hands... and are all mitigated/risk-assessed down.

I do not use cloud anything... putting even 1/10th of my shit onto the cloud would cost thousands a month.

[–] iso@lemy.lol 13 points 4 months ago (1 children)

It's quite robust, but it looks like everything will be destroyed if your server room burns down :)

[–] Saik0Shinigami@lemmy.saik0.com 10 points 4 months ago* (last edited 4 months ago) (1 children)

The fire extinguisher is in the garage... literal feet from the server. But that specific problem is actually being addressed soon. My dad is setting up his cluster, and I fronted him about 1/2 the capacity I have. I intend to sync long-term/slow storage to his box (the TrueNAS box is the Proxmox Backup Server target, so it also collects the backups and will put a copy offsite).
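
For the offsite copy itself, one option (not necessarily what I'll land on; ZFS replication is the other obvious route) is plain rsync over SSH on a timer. Paths and hostnames here are placeholders:

```python
# Illustrative offsite sync: push the long-term/slow dataset to the remote box
# with rsync over SSH. Source, user and host are placeholders, not my real setup.
import subprocess

SRC = "/mnt/tank/longterm/"                   # hypothetical local dataset
DEST = "backup@dads-box:/mnt/tank/offsite/"   # hypothetical remote target

subprocess.run([
    "rsync",
    "-aH",              # archive mode, preserve hard links
    "--delete",         # mirror deletions so the copy matches the source
    "--partial",        # keep partially transferred files across interruptions
    "--info=progress2", # one overall progress line
    SRC, DEST,
], check=True)
```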

Slow process... Working on it :) Still have to maintain my normal job after all.

Edit: another possible mitigation I've seriously thought about for "fire" is things like these...

https://hsewatch.com/automatic-fire-extinguisher/

Or those types of modules that some 3d printer people use to automatically handle fires...

[–] iso@lemy.lol 4 points 4 months ago (1 children)

Yeah I really like the "parent backup" strategy from @hperrin@lemmy.world :) This way it costs much less.

[–] Saik0Shinigami@lemmy.saik0.com 3 points 4 months ago* (last edited 4 months ago) (1 children)

The real fun is going to be when he's finally up and running... I have ~250TB of data on the TrueNAS box. The initial sync is going to take a hot week... or 2...

Edit: 23 days at his max download speed :(

Fine.. a hot month and a half.
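
The math behind that estimate, for anyone running the same numbers (the 250TB is real; the ~1Gbps downlink is what the 23-day figure implies):

```python
# Initial-sync time: 250 TB over a ~1 Gbps downlink, ignoring protocol
# overhead, so this is really a best case.
data_bits = 250e12 * 8        # 250 TB (decimal) in bits
link_bps = 1e9                # assumed max download speed at the far end

seconds = data_bits / link_bps
print(f"≈ {seconds / 86400:.0f} days")   # ≈ 23 days
```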

[–] shiftymccool@programming.dev 1 points 4 months ago* (last edited 4 months ago)

I'm doing something similar (with a lot less data), and I'm planning to sync locally the first time to avoid this exact scenario.

[–] notfromhere@lemmy.ml 3 points 4 months ago* (last edited 4 months ago) (2 children)

Different phases of power? Did you have 3-phase run to your house or something?

You could get Starlink for a redundant internet connection. Load balancing / failover is an interesting challenge if you like to DIY.
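
The DIY version of failover really can be a few lines on a Linux router. A naive sketch (run from cron as root; the gateways and interface names are made up for illustration):

```python
# Naive dual-WAN failover: ping out the primary interface, and if it's dead,
# point the default route at the backup (e.g. Starlink) gateway.
import subprocess

PRIMARY_GW, PRIMARY_IF = "203.0.113.1", "eth0"      # placeholder primary WAN
BACKUP_GW, BACKUP_IF = "192.168.100.1", "eth1"      # placeholder Starlink WAN

def wan_up(iface):
    """Ping a public anchor out a specific interface; True if it answers."""
    return subprocess.run(
        ["ping", "-c", "3", "-W", "2", "-I", iface, "1.1.1.1"],
        stdout=subprocess.DEVNULL,
    ).returncode == 0

gw, dev = (PRIMARY_GW, PRIMARY_IF) if wan_up(PRIMARY_IF) else (BACKUP_GW, BACKUP_IF)
subprocess.run(["ip", "route", "replace", "default", "via", gw, "dev", dev], check=True)
print(f"default route -> {gw} ({dev})")
```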

[–] corroded@lemmy.world 4 points 4 months ago (1 children)

In the US at least, most equipment (unless you get into high-end datacenter stuff) runs on 120V. We also use 240V power, but a 240V connection is actually two 120V legs 180 degrees out of phase (split-phase). The main feed coming into your home is 240V, and your breaker panel splits the circuits roughly evenly between the two legs. Running dual-phase power to a server rack is as simple as running two 120V circuits from the panel, one off each leg.
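
A quick way to convince yourself of the 240V number (just the math, nothing setup-specific): each leg is 120V RMS, and because they're 180 degrees apart, leg-to-leg the waveforms subtract into twice the amplitude.

```python
# Two 120 V RMS legs, 180 degrees out of phase: 120 V leg-to-neutral,
# 240 V leg-to-leg.
import math

rms = 120.0
peak = rms * math.sqrt(2)

ts = [k / 1000 for k in range(1000)]                       # one full cycle
leg_a = [peak * math.sin(2 * math.pi * t) for t in ts]
leg_b = [peak * math.sin(2 * math.pi * t + math.pi) for t in ts]
leg_to_leg = [a - b for a, b in zip(leg_a, leg_b)]

rms_ab = math.sqrt(sum(v * v for v in leg_to_leg) / len(leg_to_leg))
print(f"leg-to-leg ≈ {rms_ab:.0f} V RMS")                  # ≈ 240
```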

My rack only receives a single 120V circuit, but it's backed up by a dual-conversion UPS and a generator on a transfer switch. That was enough for me. For redundancy, though, dual phases, each with its own UPS, and dual-PSU servers are hard to beat.

[–] Saik0Shinigami@lemmy.saik0.com 1 points 4 months ago

Exactly this. Two phases into the house, batteries on each leg. While it would be exceedingly rare for just one phase to go out... I can in theory weather that storm indefinitely.

[–] Saik0Shinigami@lemmy.saik0.com 1 points 4 months ago (1 children)

Nope, 240. I have 2x 120V legs.

I actually had Verizon home internet (5G LTE) to do that... but I need static addresses for some services. I'm still working that out a bit...

[–] possiblylinux127@lemmy.zip 1 points 4 months ago (1 children)

Couldn't you use a VPS as the public entry point?

[–] Saik0Shinigami@lemmy.saik0.com 1 points 4 months ago

I could... But it would be a royal pain in the ass to find a VPS that has a clean address to use (especially for email operations).
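
(For non-email services the plumbing itself is the easy part. An illustrative sketch of the "VPS as entry point" idea using a reverse SSH tunnel; host, user and ports are placeholders, and a WireGuard tunnel would be the more robust way to do the same thing.)

```python
# Keep a reverse SSH tunnel open from the home server to the VPS so the VPS
# can forward public traffic back in. The VPS's sshd needs GatewayPorts
# enabled for outside clients to reach REMOTE_PORT.
import subprocess

VPS = "tunnel@vps.example.com"     # hypothetical VPS account
REMOTE_PORT = 8443                 # port the VPS listens on
LOCAL_PORT = 443                   # service on the home network

subprocess.run([
    "ssh", "-N",                       # no remote command, tunnel only
    "-o", "ServerAliveInterval=30",    # notice dead tunnels quickly
    "-R", f"{REMOTE_PORT}:localhost:{LOCAL_PORT}",
    VPS,
], check=True)
```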

[–] Mora@pawb.social 2 points 4 months ago (2 children)

Absurdly safe.

[...] Ceph

For me those two things are mutually exclusive. I had nothing but trouble with Ceph.

[–] Saik0Shinigami@lemmy.saik0.com 2 points 4 months ago

Ceph has been FANTASTIC for me. I've done the dumbest shit to try and break it and have had great success recovering every time.

The key in my experience is OODLES of bandwidth. It LOVES fat pipes. In my case, 2x 40Gbps links on all 5 servers.
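
To put rough numbers on why the fat pipes matter, here's the back-of-the-envelope for re-replicating a dead OSD (drive size and fill level are assumptions; real recovery is throttled and slower, but it scales the same way):

```python
# Rough time to re-replicate one failed SSD OSD at wire speed.
osd_size_tb = 4        # assumed capacity of the failed OSD
fill_fraction = 0.6    # assumed how full it was
link_gbps = 2 * 40     # per-host cluster bandwidth (from above)

data_bits = osd_size_tb * 1e12 * fill_fraction * 8
minutes_40g = data_bits / (link_gbps * 1e9) / 60
minutes_10g = data_bits / (10e9) / 60
print(f"≈ {minutes_40g:.0f} min on 2x40 Gbps vs ≈ {minutes_10g:.0f} min on 10 Gbps")
```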

[–] possiblylinux127@lemmy.zip 1 points 4 months ago

It depends on how you set it up. Most people do it wrong.

[–] possiblylinux127@lemmy.zip 2 points 4 months ago (1 children)

You should edit your post to make this sound simple.

"just a casual self hoster with no single point of failure"

[–] Saik0Shinigami@lemmy.saik0.com 2 points 4 months ago

Nah, that'd be mean. It isn't "simple" by any stretch. It's an aggregation of a lot of hours put into it. What's fun is that when it gets that big you start putting tools together to do a lot of the work/diagnosing for you. A good chunk of those tools have made it into production for my companies too.

LibreNMS to tell me what died and when... Wazuh to monitor most of the security aspects of it all. I have a Gitea instance with my own repos for scripts for when maintenance time comes around. Centralizing stuff like that, plus a cron stub on the containers/VMs, means you can update all your stuff in one go.
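
The "cron stub" idea, stripped down to a sketch (the hostnames are placeholders and this assumes Debian-family guests reachable over key-based SSH; my real scripts live in the Gitea repos):

```python
# One central script, run from cron, that SSHes into each container/VM and
# runs the distro update. Hostnames below are made up for illustration.
import subprocess

HOSTS = ["vm-nextcloud", "vm-gitea", "ct-librenms"]   # hypothetical inventory

for host in HOSTS:
    print(f"--- {host} ---")
    result = subprocess.run(
        ["ssh", f"root@{host}", "apt-get update && apt-get -y upgrade"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print(result.stdout[-500:])                   # tail of the upgrade log
    else:
        print(f"FAILED: {result.stderr.strip()}")
```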

[–] possiblylinux127@lemmy.zip 1 points 4 months ago (1 children)

What does your internal network look like for ceph?

[–] Saik0Shinigami@lemmy.saik0.com 2 points 4 months ago* (last edited 4 months ago)

40 SSDs as my OSDs, spread across 5 hosts. All nodes run all functions (monitor/manager/metadata server); if I added more servers I would not add any more of those. (I do have 3 more servers for "parts"/spares... but I could turn them on too if I really wanted to.)

2x 40Gbps networking for each server.

Since upstream internet is only 8Gbps, I let some VMs use that bandwidth too... but that doesn't eat into it enough to starve Ceph at all. There's 2x 1Gbps for all the normal internet-facing services (which also acts as an innate rate limiter for those services).
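
If anyone wants to see that layout straight from the cluster, `ceph osd tree` has a JSON output you can tally up. A small sketch, run on a node with the admin keyring (field names match recent Ceph releases; adjust if yours differ):

```python
# Count OSDs per host from `ceph osd tree` JSON output.
import json
import subprocess

out = subprocess.run(["ceph", "osd", "tree", "-f", "json"],
                     capture_output=True, text=True, check=True).stdout
tree = json.loads(out)

nodes = {n["id"]: n for n in tree["nodes"]}
for node in tree["nodes"]:
    if node.get("type") == "host":
        osds = [c for c in node.get("children", []) if nodes[c]["type"] == "osd"]
        print(f"{node['name']}: {len(osds)} OSDs")   # expect 8 per host here
```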