
I just attached a new volume to my VPS. Usually I follow the provided instructions using parted and mkfs.ext4, but this time I decided to try ZFS.

The guides I've found online are all very different, and I'm not sure whether I did everything correctly or whether the data will be safe.
What I mean is that running lsblk -o name,size,fstype,type,mountpoint shows this:

NAME     SIZE FSTYPE   TYPE MOUNTPOINT
vdb      100G          disk
└─vdb1   100G ext4     part /mnt/storage
vdc      100G          disk
├─vdc1   100G          part
└─vdc9     8M          part

You can see that the fstype and mountpoint of the previous volume are listed, but the ZFS ones aren't.
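As far as I can tell, that's because ZFS mounts datasets itself (the mount is recorded under the dataset name rather than the block device), so lsblk has nothing to associate with vdc1. A couple of commands that should confirm the data really lives on ZFS (using the pool name local-zfs and mountpoint /mnt/zfs from the output below):

blkid /dev/vdc1    # should report TYPE="zfs_member" on a ZFS vdev partition
findmnt /mnt/zfs   # shows the dataset (local-zfs) mounted there with fstype zfs
zfs mount          # lists every mounted ZFS dataset and its mountpoint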

Still, I can access the ZFS pool I created without issues, and I've already copied some test data over.

root@vps:~/services# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local-zfs   99.5G  6.88G  92.6G        -         -     0%     6%  1.00x    ONLINE  -
root@vps:~/services# zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
local-zfs   6.88G  89.5G     6.88G  /mnt/zfs
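
Pool health can also be checked directly; a quick way (same pool name as above):

zpool status -v local-zfs   # shows the vdev layout and any read/write/checksum errors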

These are the commands I ran:

parted -s /dev/vdc mklabel gpt
parted -s /dev/vdc unit mib mkpart primary 0% 100%
zpool create -o ashift=12 -O canmount=on -O atime=off -O recordsize=8k -O compression=lz4 -O mountpoint=/mnt/zfs local-zfs /dev/vdc
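
(Side note: because the pool was created on the whole /dev/vdc rather than on the partition made with parted, ZFS appears to have written its own GPT label, which would explain the vdc1/vdc9 layout in the lsblk output above.) To double-check that the creation-time options actually applied, something like this should work (a sketch, reusing the pool name from above):

zpool get ashift local-zfs                                  # pool-wide property, fixed at creation time
zfs get recordsize,compression,atime,mountpoint local-zfs   # dataset properties from the -O flags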

Does this look good?
Should I do something else, like adding an entry to fstab?
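
From what I understand, ZFS normally handles its own mounts via the pool's cachefile and the OpenZFS systemd units rather than fstab, so something like this should show whether that's already wired up (a sketch, assuming a systemd distro with the stock OpenZFS units):

zpool get cachefile local-zfs                                 # "-" with source "default" means /etc/zfs/zpool.cache is used
systemctl status zfs-import-cache.service zfs-mount.service   # units that import the pool and mount datasets at boot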

The list of properties is very long; is there any property you'd recommend I look into for a simple server that currently only stores non-critical data?
(I already have a separate backup solution; I may look at updating it later.)

twack@lemmy.world · 1 point · 1 year ago

Ubuntu has ZFS on root as one of the options in the normal graphical installer. I have it running on multiple machines.

_TK@lemmy.antemeridiem.xyz · 1 point · 1 year ago

This must have changed with 23.04 or something then, because when I set up my home server a little over a year ago, ZFS on root not only wasn't part of the installer, it was heavily recommended against as something you could only hack in. Basically, my impression was that you could do it, but you shouldn't. I ended up using ext4 for root and mounted my ZFS storage in my home directory.
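
(For what it's worth, mounting ZFS storage under a home directory like that only takes a dataset property; a minimal sketch with a hypothetical pool name tank and user tk:)

zfs create -o mountpoint=/home/tk/storage tank/storage   # ZFS mounts it there itself, no fstab entry needed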