Anyone running ZFS? (lemmy.fwgx.uk)
submitted 1 month ago* (last edited 1 month ago) by blackstrat@lemmy.fwgx.uk to c/selfhosted@lemmy.world
 

At the moment I have my NAS set up as a Proxmox VM, with a hardware RAID card handling six 2 TB disks. My VMs run on NVMe drives; the NAS VM handles the data storage, with the RAIDed volume passed directly through to it in Proxmox. I am running it as one large ext4 partition. It holds mostly photos, personal docs and a few films, and only I really use it. My desktop and laptop mount it over NFS, and I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS curious. I know I'll need to flash the RAID card to IT mode so it behaves as a plain HBA, or get a separate one. I'm guessing it's best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there? The two options look roughly like the sketch below.
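For concreteness, here is what the two approaches might look like. This is only a sketch: the pool name, the VM ID, the raidz2 layout and the disk IDs are all placeholders, not details from the actual setup.

```bash
# Option 1: build the pool on the Proxmox host and expose it to the
# NAS VM afterwards (e.g. as a zvol-backed virtual disk or over NFS).
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Option 2: hand the raw disks to the VM (ID 100 here) and create
# the pool inside the guest with the same zpool command.
qm set 100 -scsi1 /dev/disk/by-id/ata-DISK1
qm set 100 -scsi2 /dev/disk/by-id/ata-DISK2
# ...and so on for the remaining four disks.
```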

[–] Mio@feddit.nu 1 points 1 month ago (1 children)
[–] blackstrat@lemmy.fwgx.uk 4 points 1 month ago (2 children)

It stole all my data. It's a bit of a clusterfuck of a file system, especially for one so old. This article gives a good overview: https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/ It managed to get into a state where it wouldn't even let me mount it read-only. I even resorted to running commands whose documentation just said "only run this if you know what you're doing", but gave no guidance for actually understanding them - they were basically commands for the developer to use and no one else. They didn't work anyway. Every other system that was using the same disks but with ext4 filesystems came back; I was able to fsck them and continue on. I think they're all still running without issue six years later.

For such an old file system, it has a lot of braindead design choices and is still remarkably unreliable.
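(For anyone wondering what that kind of recovery attempt looks like, it goes roughly like this. The device and mount points are placeholders and `rescue=all` needs kernel 5.12 or newer; this is a sketch of the general approach, not the exact commands from back then.)

```bash
# First try a read-only mount with all rescue options enabled.
mount -o ro,rescue=all /dev/sdX /mnt/broken

# If it won't mount at all, try copying files off the raw device.
btrfs restore /dev/sdX /mnt/recovery

# The "only if you know what you're doing" territory: repair can
# make a damaged filesystem worse, so it's a last resort.
btrfs check --repair /dev/sdX
```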

[–] snugglebutt@lemmy.blahaj.zone 1 points 1 month ago

'short for "B-Tree File System"'. maybe i should stop reading it as butterfucks

[–] Mio@feddit.nu 1 points 1 month ago* (last edited 1 month ago) (1 children)

Data loss is never fun. File systems in general need a long time to iron out all the bugs; I hope it is in a better state today. I remember when ext4 was new and crashed on a laptop. Ubuntu was too early to adopt it, or I was not on LTS.

But as always, make sure to have a proper backup in a different physical location.
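One way to do that with restic (which is already in use here) is a second repository on a remote machine. The host and paths below are made up:

```bash
# One-time setup of an off-site repository reached over SSH.
restic -r sftp:backup@offsite.example.com:/srv/restic init

# Back up the NAS data to the remote repository (run on a schedule).
restic -r sftp:backup@offsite.example.com:/srv/restic backup /mnt/nas

# Periodically make sure the remote repository is still readable.
restic -r sftp:backup@offsite.example.com:/srv/restic check
```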

[–] zingo@sh.itjust.works 1 points 1 month ago (1 children)

Found a Swede in this joint! Cheers.

[–] Mio@feddit.nu 1 points 1 month ago* (last edited 1 month ago) (1 children)

You will find many more at feddit.nu

[–] zingo@sh.itjust.works 1 points 1 week ago

Yes I'm sure.

Not really searching for 'em though. :)