this post was submitted on 14 Jul 2025
128 points (97.1% liked)

Selfhosted


I'm not really sure how to ask this because my knowledge is pretty limited. Any basic answers or links will be much appreciated.

I have a number of self-hosted services on my home PC. I'd like to be able to access them safely over the public Internet. There are a couple of reasons for this. There is an online calendar scheduling service that I would like to give access to my CalDAV/CardDAV setup. I'd also like to set up Nextcloud, which seems to more or less require HTTPS. I am using HTTP connections secured through Tailscale at the moment.

I own a domain through an old Squarespace account that I would like to use. I currently have zero knowledge or understanding of how to route my self hosted services through the domain that I own, or even if that's the correct way to set it up. Is there a guide that explains step by step for beginners how to access my home setup through the domain that I own? Should I move the domain from Squarespace to another provider that is better equipped for this type of setup?

Is this a bad idea for someone without much experience in networking in general?

all 39 comments
[–] francois@sh.itjust.works 3 points 2 days ago* (last edited 2 days ago)

Exposing services over the public internet is not without risks; you might consider building more knowledge before doing it.

Every service you expose should require authentication and may need to handle bots

To gain knowledge about reverse proxies and DNS without much risk exposure, you could start by setting up your custom domain name on your private Tailscale network; here is an example of how you could do it.

Now if you really want to expose services on the internet because you have devices that you don't want to connect to your Tailscale network, you could use Tailscale Funnel, but only with the ts.net subdomain that they provide you, not with your own domain.

There is an open issue to support custom domains https://github.com/tailscale/tailscale/issues/11563

I'm waiting for this to get resolved; in the meantime I have a VPS with a reverse proxy connected to a Tailscale container that serves my services from my home network, so that I don't need a static IP or to open a port in my home router.

[–] eksb@programming.dev 41 points 5 days ago (2 children)
  1. Consider getting a VPS to play around with to learn how this stuff works before you expose your data to the internet.
  2. Learn about how DNS works. You will create an A record (and possibly also an AAAA record) for your domain pointing to your home IP (or VPS).
  3. If Squarespace does not let you set records (and only allows you to use Squarespace-hosted services), you will need to migrate your domain to another provider. I like gandi.net.
  4. Learn how your router does port forwarding. You will forward port(s) for the calendar service from your router to your home PC. (Or learn how to do firewalls on your VPS.)
  5. Before you actually connect to it with credentials over the internet, set up SSL/TLS certificates with Let's Encrypt.
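A quick way to sanity-check steps 2 and 3 above, assuming a Unix-like machine with `dig` and `curl` installed (the domain here is a placeholder):

```shell
# See what the A/AAAA records for your domain currently resolve to
dig +short A example.com
dig +short AAAA example.com

# Compare against your current public IPv4 address
curl -4 -s https://icanhazip.com
```

If the two IPs match, DNS is pointing at your network and you can move on to port forwarding and certificates.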
[–] irmadlad@lemmy.world 1 points 2 days ago

Consider getting a VPS to play around with to learn how this stuff works before you expose your data to the internet.

Highly recommend this, especially when exposing your local server to the internet while you may still be a bit green on the security aspects of self-hosting. Small VPSes for under $30 a year are a dime a dozen, and well worth the price for the education you can get from them.

Even now, I have a small VPS that I regularly test things on before I put it on the production server.

[–] pHr34kY@lemmy.world 10 points 4 days ago* (last edited 4 days ago) (1 children)

The educational route I took was Hurricane Electric's free IPv6 online course. It taught me a bunch of networking principles. When you finish the course (and get "sage" status), you get free lifetime DNS access. This includes dynamic DNS that automatically updates when your IP address changes.

Because of this, I can self-host on a basic residential plan without paying for any additional services.

[–] Fedegenerate@lemmynsfw.com 3 points 3 days ago* (last edited 3 days ago)

Oooo this might be the path I take to finally get off IPv4. Cheers. I've already set up reverse proxies, but finally updating to 1999 technology seems like a good plan.

[–] illusionist@lemmy.zip 25 points 5 days ago (2 children)

Caddy with a Caddyfile is very easy, although it lacks a GUI. Use Nginx Proxy Manager if you want a GUI, but it is more work than a Caddyfile.

https://caddyserver.com/docs/quick-starts/caddyfile
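As a rough sketch, a Caddyfile for two services could look like this (hostnames and ports are placeholders; Caddy obtains and renews the TLS certificates automatically):

```
# Each site block proxies one subdomain to a local service
nextcloud.example.com {
    reverse_proxy localhost:11000
}

dav.example.com {
    reverse_proxy localhost:5232
}
```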

[–] irmadlad@lemmy.world 1 points 2 days ago

it lacks a gui

I've never used this, but I wandered across it about a month ago: https://github.com/qdm12/caddy-ui

If you search for 'caddy ui' there are a number of them. I don't really see a need for a caddy ui, but some might.

[–] CrayonDevourer@lemmy.world 15 points 5 days ago* (last edited 5 days ago) (1 children)

Seconding Caddy -- it's as close as it gets to "just works". It handles all the certs, it's easy to refresh and add a subdomain instantly, it handles wildcard domains, and the config file is dead simple to understand.

You can use https://xcaddy.tech/ to build Caddy with various plugins; I use mine with transform-encoder so that logs can be made compatible with fail2ban.
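For reference, the xcaddy invocation for that plugin would look something like this (assuming Go and xcaddy are already installed):

```shell
# Build a caddy binary with the transform-encoder module compiled in
xcaddy build --with github.com/caddyserver/transform-encoder
```

The resulting `caddy` binary lands in the current directory and replaces the stock one.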

[–] illusionist@lemmy.zip 1 points 5 days ago* (last edited 5 days ago) (2 children)

I wish I understood how to use xcaddy, but I failed the last two times I tried setting it up 😅 it was something about another language (Go?) being needed, IIRC.

[–] CrayonDevourer@lemmy.world 4 points 4 days ago* (last edited 4 days ago) (1 children)

https://caddyserver.com/download

Use this if xcaddy is too much.

Select your platform, then just click the little boxes next to the modules you want included, then hit the download button

[–] illusionist@lemmy.zip 1 points 4 days ago

I will test that ASAP!! that looks great, thank you!

[–] ryandenotter@programming.dev 3 points 3 days ago* (last edited 3 days ago)

The easiest way to do this is through Tailscale. It is a WireGuard VPN mesh that is super easy to set up and allows you to access your self-hosted services without exposing them to the public internet.

https://tailscale.com/

Here is a great article to get you started: https://tailscale.com/kb/1017/install

They also have an awesome YouTube channel with great tutorials to help you get started. https://www.youtube.com/@Tailscale

Note: while this may not directly answer OP's specific question, I believe it will get them the outcome they are looking for: external access to self-hosted services.

[–] 3dcadmin@lemmy.relayeasy.com 1 points 2 days ago

If you want the easy way, consider Cloudflare and a tunnel. You can set it up in various ways, but one way is to have a public hostname, which can be a subdomain, and then point it at your server. You'd have to have the DNS/domain use Cloudflare nameservers for that, though. This is really easy to do, and you can move on to other ways later if you wish. Tailscale is another way, but Cloudflare will also act as a very good CDN/cache without much tweaking on your part. I have used Cloudflare forever, so I still use tunnels; I've never seen the need to change yet. In fact, my Lemmy instance is cached/proxied through a Cloudflare tunnel:

https://lemmy.relayeasy.com/

[–] OhVenus_Baby@lemmy.ml 2 points 3 days ago

Tailscale the end.

[–] littleomid@feddit.org 9 points 5 days ago* (last edited 4 days ago) (1 children)

Three steps:

  1. point the FQDN to your network (Dynamic DNS).
  2. set up reverse proxy (Nginx, etc.)
  3. set up certificates (Certbot, etc.)

Optional step 4: harden with fail2ban and a firewall.
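A minimal sketch of step 2 as an Nginx server block (the domain and upstream port are placeholders; Certbot rewrites this for HTTPS when you run step 3):

```nginx
server {
    listen 80;
    server_name nextcloud.example.com;

    location / {
        # Forward requests to the local service
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Step 3 is then typically `sudo certbot --nginx -d nextcloud.example.com`, which obtains a certificate and adds the TLS configuration for you.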

[–] bruce965@lemmy.ml 7 points 5 days ago* (last edited 5 days ago) (2 children)

I would say this would be the proper way to do it (at least as a sysadmin), but since it's OP's first time I would simplify it to:

  1. Install CloudFlare ZeroTrust daemon on your local server;
  2. Set up reverse proxy such as Nginx (optional, the alternative is to use a different subdomain for each service, which might be easier);
  3. Point the FQDN to CloudFlare.

Let CloudFlare handle the certificates, DDoS protection, etc... Link if you'd like to give this setup a try.

[–] ag10n@lemmy.world 6 points 5 days ago (1 children)

Cloudflare isn't very self-hosted; unless you want/need to trust a third party, I wouldn't recommend this.

[–] bruce965@lemmy.ml 2 points 4 days ago

They provide decent defaults for all the not-so-straightforward configurations, and they provide a web UI to configure the rest. That's the sole reason I would recommend it to get one's feet wet without having to work too much.

If one is committed to doing things "the right way", they could switch to Nginx and "proper" self-hosting later.

[–] brian@lemmy.ca 1 points 5 days ago (2 children)

How would you go about using a different subdomain without something like a reverse proxy? Heck, in my head that's almost the only reason I use a reverse proxy

[–] bruce965@lemmy.ml 1 points 4 days ago

Yeah, I'm afraid you have to use a reverse proxy to host multiple subdomains. The CloudFlare daemon is the reverse proxy.

[–] SheeEttin@lemmy.zip 1 points 5 days ago

Most web servers already use the Host header.

[–] irotsoma@lemmy.blahaj.zone 7 points 4 days ago

Really, the first issue is your IP address. How does your ISP hand out IP addresses: IPv4, IPv6, or both?

If you have an ISP that gives you a static block of IPv6 addresses, that simplifies things immensely. But also consider that many legacy, monopoly ISPs have not implemented IPv6 for their customers, especially in the US, so domains without an IPv4 address aren't accessible from the homes of people who use those ISPs. Still, it means you could assign a static IPv6 address to each service if you wanted to, and add subdomains for each. Then you just need to deal with security on that system.

Otherwise, you'll likely need to deal with dynamic DNS. If your router and your domain registrar's DNS can work together for DDNS, that's ideal. For example, my OPNsense router updates my Cloudflare-registered domain directly when my ISP changes my IPv4 address (I have one of those ISPs that still doesn't assign IPv6, but I don't have any choice if I want more than 5-10 Mbps upload speeds).

Then you need to deal with routing. The best way is with a reverse proxy like Caddy, or Traefik, which I actually like a lot because it works well with my complex setup involving Docker and Kubernetes, among other things. Basically, your router needs to route all the inbound traffic on the appropriate ports to the reverse proxy, which then routes it to the appropriate service based on the subdomain and/or port of the request.

Once you route the subdomain to the appropriate service, you need to deal with security. Once a service is exposed, it will eventually start getting hit by bots trying to access it. It's best to implement something like fail2ban to stop them from wasting your processing power with failed logins, 404 errors, and such.
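As an illustration, a fail2ban jail for an Nginx-fronted setup might look like this in `/etc/fail2ban/jail.local` (the jail name uses a stock fail2ban filter; log paths and ban times depend on your distribution and tolerance):

```ini
[nginx-http-auth]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/error.log
maxretry = 5
bantime  = 3600
```

After editing, restart the service (e.g. `sudo systemctl restart fail2ban`) and check active jails with `sudo fail2ban-client status`.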

[–] UndeadDreads@lemmy.world 7 points 5 days ago

Check out Nginx Proxy Manager https://nginxproxymanager.com/

Create some subdomains and use Nginx Proxy Manager to generate SSL certs and point them to your self-hosted applications.

[–] themadcodger@kbin.earth 6 points 4 days ago (3 children)

We all got to learn somewhere!

Lots of good advice here, but sometimes people forget what it's like to be a beginner. Since you don't know what you're doing yet, I would recommend not trying to host things on your home server and access them from the outside world. That usually involves port forwarding on your router, which comes with a lot of risks, especially if you don't know what you're doing. Others have mentioned it, but a better option when you're starting off is to rent a VPS and host your software there.

Squarespace might work, but my guess is it'll be easier to transfer your domain elsewhere. You can follow guides for that online and it's pretty straightforward.

With a VPS and a domain name, you're most of the way there. On your VPS, you'll want to install a reverse proxy, which is what routes incoming URLs to the right place (nextcloud.domain.tld goes here, calendar.domain.tld goes there).

Docker is another thing I'd recommend learning as a lot of what you'll self host will likely be in a Docker container. I'd watch a few YouTube videos to see how it's done. This channel has some great videos, and there are others out there.

It seems like a lot, but learn a little here and there and don't expect to have this all working overnight. You'll get there!

[–] SkabySkalywag@lemmy.world 2 points 4 days ago

Nice one, mate!

[–] marighost@piefed.social 2 points 4 days ago

Appreciate the write up specifically for beginners. And thanks for the channel recommendation!

[–] uranibaba@lemmy.world 2 points 4 days ago (1 children)

Love docker. Updating has never been easier.

[–] mic_check_one_two@lemmy.dbzer0.com 2 points 3 days ago (1 children)

I actually wanted to ask about that… Is it considered best practice to run a bunch of different compose files, and update them all separately? Or do you just throw all of them into a single compose file, and refresh the entire stack when updating?

The latter definitely seems like it would be more streamlined in terms of updating, but could potentially run into issues as images change. It also feels like it would result in a bunch of excess pulls. Maybe only two images out of a dozen need to be updated, but you just pulled your entire stack. Maybe you want to stay on a specific version of one container, while updating all the others. Sure you could go edit the version number in the compose, but that means actually remembering to edit the compose before you update.

[–] uranibaba@lemmy.world 1 points 2 days ago* (last edited 2 days ago)

Is it considered best practice to run a bunch of different compose files, and update them all separately?

tl;dr I do one compose file per application/folder because I found that to suit me best.

I knew about Docker and what it was for a long time, but only recently started to use it (in the past year or so), so I'm no expert. Before Docker, I had one VM for each application I wanted, and if I messed something up (installed something that broke it, or similar), I just removed the entire VM and made a new one. This also came with the problem that every VM needed to be stopped before the host could shut down, and startup took more work to ensure everything came up correctly.

Here is a sample of my layout:

.
├── audiobookshelf
│   └── config
├── diun
│   └── data
├── jellyfin
├── kuma
├── mealie
│   ├── data
│   └── pgdata
├── n8n
│   ├── n8n_data
│   └── n8n_files
├── paperless
│   ├── consume
│   └── export
├── syncthing
│   └── data
└── tasksmd
    └── config

I considered using one compose file and putting everything in it, but opted instead to use one file per project. Using one compose file for everything would make it difficult to stop just one application. And by having it split into separate folders, I can just remove everything in a folder and start a new container if I mess up.

As for updating, I made a script that pulls everything:

#!/bin/bash

# Stop, pull, and restart one compose project.
# Runs in a subshell so the working directory is restored afterwards.
function docker_update {
    (
        cd "$1" || return
        docker compose down && docker compose pull && docker compose up -d
    )
}
docker_update "/path/to/app1"
docker_update "/path/to/app2"
docker_update "/path/to/app3"

Here is a small sample from my n8n compose file (not complete file):

services:
  db:
    container_name: n8n-db
    image: postgres
    ...
    networks:
      - n8n-network

  adminer:
    container_name: n8n-db-adminer
    image: adminer
    restart: unless-stopped
    ports:
      - 8372:8080
    networks:
      - shared-network
      - n8n-network

  n8n:
    container_name: n8n
    networks:
      - n8n-network
      - shared-network
    depends_on:
      db:
        condition: service_healthy

volumes:
  db_data:

networks:
  n8n-network:
  shared-network:
    external: true

shared-network is shared between Caddy and any container I need to access externally (via the reverse proxy), and then each application gets its own network shared between that application's services.

[–] Blaster_M@lemmy.world 3 points 5 days ago* (last edited 5 days ago) (1 children)

On your DNS provider, make an A record with your IPv4 address and an AAAA record with your IPv6 address. If these addresses change often, either set up dynamic DNS (your DNS provider needs to support this) or pay for a static IP from your ISP. Firewall the hell out of your network: have a default deny (drop) rule for new inbound traffic, and only open ports for your service. Use an Nginx reverse proxy if possible to keep direct connections away from your service, and use containers (Docker?) for your service(s). Don't forget to set up certbot and fail2ban. You need certbot to auto-renew your certs, and you need fail2ban to keep the automated login hacker bots from getting in.

That's the minimum. You can do more with IP region blocking and such, as well as more advanced firewalling and isolation. It's also possible to use Tailscale and point the DNS A record to the Tailscale IP, which eliminates exposing your public IP to the internet.
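The default-deny firewall described above can be sketched with ufw, assuming a Debian/Ubuntu-style host (adjust the allowed ports to whatever your services actually use):

```shell
# Drop all new inbound connections by default, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Open only the ports your exposed services need
sudo ufw allow 80/tcp    # HTTP (Let's Encrypt challenges / redirects)
sudo ufw allow 443/tcp   # HTTPS

sudo ufw enable
```

Run `sudo ufw status verbose` afterwards to confirm the rules took effect.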

[–] gedaliyah@lemmy.world 1 points 4 days ago (1 children)

If I use Tailscale as described, how will a request connect to the tailnet? Is there anything you can link that explains how to do this?

[–] Blaster_M@lemmy.world 2 points 4 days ago* (last edited 4 days ago) (1 children)

When you put your server's tailscale IP in the dns, anything that looks up that dns gets the tailscale IP. You only need to connect the devices you want to have connect to the server to the same tailscale network, and your system will handle the routing.

[–] gedaliyah@lemmy.world 1 points 4 days ago

Okay, that makes sense. Would that help to set up NextCloud or other services that require https?

It doesn't really help with connecting my calendar to an external scheduling app that is not based on my device.

[–] PeriodicallyPedantic@lemmy.ca 2 points 4 days ago

It depends on your motivations and security requirements.

If you're already hosting Home Assistant, there is an add-on for CloudFlared which will take care of most of everything for you, using CloudFlare secure tunnels.
It even does simple subdomain reverse proxy, to serve your other services.

It requires that you use CloudFlare for your DNS entries, and it won't secure your host for you (they do offer some free services to help a little), and you still end up depending on a cloud service provider so it's not pure self hosting.
But it's free, you're still mostly in control, and it's less likely to catastrophically mess up your netsec if you're a beginner.

[–] vostrik@pol.social 2 points 4 days ago (1 children)

@gedaliyah After a lot of thinking, I decided that the only two things I need available on the public internet are my SearXNG instance and my XMPP server. The rest (music streaming, file sharing, home automation, etc.) can live happily in a VPN with clients on trusted devices.
This way you don't have to wake up every night to check whether some piece of software with access to your whole network was pwned because of an outdated leftpad version or something.

[–] gedaliyah@lemmy.world 1 points 4 days ago

Thanks, I appreciate your experience.

[–] qaz@lemmy.world 1 points 4 days ago

If you want to expose it publicly for others to use, consider using Cloudflare for easy setup and to avoid exposing your home IP. If you just want to use it yourself, you can access it with Tailscale and forward traffic to certain ports based on the subdomain using Nginx Proxy Manager.

Cloudflare makes it pretty easy and is free (or close to free)