Do two NICs. I have a bigger setup, it's all running on one LAN, and it's starting to run into problems. Going with a two-network setup from the outset probably would have saved me a lot of grief.
Can you explain what benefit that would bring?
So dual NICs on each device, and set up another LAN on my router? Sorry, it seems like a dumb question, but I just want to make sure.
Why would you need two NICs unless you're planning on having a Proxmox VM be your router?
I haven't done it - but I believe Proxmox allows for creating a "backplane" network which the servers can use to talk directly to each other. This would be used for Ceph and server migrations, so that the large amount of network traffic doesn't interfere with the traffic of the VMs and the rest of your network.
You'd just need a second NIC and a switch to create the second network, then statically assign IPs. This network wouldn't route anywhere else.
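As a sketch of what that second network could look like on each node (interface name and subnet are placeholders, assuming ifupdown2 syntax in /etc/network/interfaces):

```
# Dedicated "backplane" NIC for Ceph and migration traffic.
# This subnet is never routed anywhere else.
auto enp3s0
iface enp3s0 inet static
    address 10.10.10.11/24        # unique per node: .12, .13, ...
    mtu 9000                      # optional: jumbo frames help storage traffic
```

Ceph's cluster network and Proxmox's migration network can then both be pointed at 10.10.10.0/24 so that traffic stays off the main LAN.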
In Proxmox there's no need to assign it to a physical NIC. If you want a virtual network that goes as fast as possible, you'd create a bridge and assign it to nothing. If you assign it to a NIC, then since it wants to use SR-IOV, it would only go as fast as the NIC can go.
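For reference, a host-only bridge in Proxmox is just a bridge stanza with no ports attached - something like this sketch (vmbr9 is an arbitrary name):

```
# No physical NIC attached: traffic between guests on this bridge
# stays in host memory, so it isn't limited to the NIC's link speed.
auto vmbr9
iface vmbr9 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```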
I think two NICs are required to do VLANing properly? Not 100% sure.
Nope - Proxmox lets you create VLAN trunks, just like a physical switch.
Edit: here's one of my Proxmox server network configs.
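As a general illustration (not the actual config from that server), the trunking bit boils down to marking the bridge VLAN-aware in /etc/network/interfaces - names here are placeholders:

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp1s0           # placeholder physical NIC
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes         # makes the bridge trunk tagged VLANs
    bridge-vids 2-4094            # VLAN IDs allowed on the trunk
```

Each guest NIC then just gets its VLAN tag set in the guest's network device options.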
Is there a reason to do this over just giving the NIC for the VM/container a VLAN tag?
You still need to do that, but you need the Linux bridge interface to have VLANs defined as well, as the physical switch port that trunks the traffic is going to tag the respective VLANs to/from the Proxmox server and virtual guests.
So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (physical infrastructure VLAN). My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and VLAN tags) in vlan100.
The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own VLAN interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:
- enp2s0f0 (physical)
- vmbr1 (Linux bridge)
- vmbr1.60 (Proxmox server interface)
- vmbr1.100 (Proxmox VLAN interface)
- vtnet1 (OPNsense "physical" NIC, but actually virtual)
- vtnet1_vlan[xxx] (OPNsense virtual NIC per VLAN)
All virtual guests default route via OPNsense's IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.
Like I said, it's a headfuck when you first set it up. Interface-ception.
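For anyone trying to picture it, the Proxmox side of that stack could look roughly like this in /etc/network/interfaces - a sketch with placeholder addresses, assuming ifupdown2 syntax (the vtnet interfaces live inside the OPNsense VM and are configured there, not here):

```
auto enp2s0f0
iface enp2s0f0 inet manual        # physical NIC, carries tagged traffic only

auto vmbr1
iface vmbr1 inet manual           # Linux bridge on top of the NIC
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 60 100

auto vmbr1.60
iface vmbr1.60 inet static        # Proxmox server's own interface in vlan60
    address 192.168.60.10/24      # placeholder addressing

auto vmbr1.100
iface vmbr1.100 inet static      # Proxmox's (optional) interface in vlan100
    address 192.168.100.10/24    # placeholder addressing
```

Guests then attach to vmbr1 with VLAN tag 100, and their default gateway is OPNsense's vlan100 address.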
The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I'd use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would've been overkill.
Huh, cool, thank you! I'm going to have to look into that. I'd love for some of my containers and VMs to be on a different VLAN from others. I appreciate the correction. 😊
No worries mate. Sing out if you get stuck - happy to provide more details about my setup if you think it'll help.
Thanks for the kind offer! I won't get to this for a while, but I may take you up on it if I get stuck.
No, you can do more than one VLAN per port. It's called a trunk.
Security. Keeping publicly accessible and locally accessible services on different networks.
Hmmm - not really any more. I have everything on the same VLAN, with publicly accessible services sitting behind an nginx reverse proxy (using Authelia and 2FA).
The real separation I have is the separate physical interface I use for WAN connectivity to my virtualised firewall/router - OPNsense. But I could also easily achieve that with VLANs on my switch, if I only had a single interface.
The days of physical DMZs are almost gone - virtualisation has mostly superseded them. Not saying they're not still a good idea, just less of an explicit requirement nowadays.
You want at least three NICs if you're going to do that. I usually use the one on the mobo for all the other services and management, then a dedicated port each for LAN and WAN on a separate NIC.
This is exactly my setup on one of my Proxmox servers - a second NIC connected as my WAN adapter to my fibre internet. OPNsense firewall/router uses it.
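For the curious, the rough shape of that three-NIC layout in /etc/network/interfaces (all names and addresses illustrative, not this poster's actual config):

```
auto vmbr0
iface vmbr0 inet static           # mobo NIC: management + other services
    address 192.168.1.10/24       # placeholder
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual           # LAN bridge: OPNsense LAN + guests
    bridge-ports enp4s0f0
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual           # WAN bridge: only OPNsense attaches here,
    bridge-ports enp4s0f1         # cabled straight to the fibre ONT; no host IP
    bridge-stp off
    bridge-fd 0
```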