[-] manwichmakesameal@lemmy.world 4 points 7 months ago

Different use cases.

[-] manwichmakesameal@lemmy.world 10 points 7 months ago

"They're not here for me"

[-] manwichmakesameal@lemmy.world 3 points 9 months ago

Why not just run your own WireGuard instance? I have a pivpn vm for it and it works great. You could also just put jellyfin behind a TLS terminating reverse proxy.
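For the reverse-proxy route, something like this nginx server block is the general shape (a minimal sketch — the hostname, cert paths, and backend address are placeholders, assuming Jellyfin on its default port 8096 behind Let's Encrypt certs):

```nginx
server {
    listen 443 ssl;
    server_name jellyfin.example.com;

    ssl_certificate     /etc/letsencrypt/live/jellyfin.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jellyfin.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;  # Jellyfin's default HTTP port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Jellyfin's web client uses websockets, so allow the upgrade
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```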

[-] manwichmakesameal@lemmy.world 82 points 10 months ago

Correct. SearxNG is very much still active. Check the GitHub page or Matrix/IRC.

[-] manwichmakesameal@lemmy.world 23 points 10 months ago

Fuck that and fuck him. He deserves no sympathy.

[-] manwichmakesameal@lemmy.world 9 points 11 months ago

It was me. Guess what I'll be doing today.

[-] manwichmakesameal@lemmy.world 4 points 11 months ago

So, docker networking uses its own internal DNS. Keep that in mind. You can (and should) create docker networks for your containers. My personal design is to have only nginx exposing port 443 and have it proxy for all the other containers inside those docker networks. I don't have to expose anything else. I also find nginx proper to be much easier to deal with than NPM or traefik or caddy.
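The pattern described above looks roughly like this in a compose file (a sketch with illustrative service and network names — docker's internal DNS resolves the service name "app" to its container IP on the shared network, so nginx can proxy to it without the app publishing any port):

```yaml
services:
  nginx:
    image: nginx:stable
    ports:
      - "443:443"          # the only published port on the host
    networks:
      - proxy-net
  app:
    image: jellyfin/jellyfin
    networks:
      - proxy-net          # reachable from nginx as http://app:8096, not from the host

networks:
  proxy-net:
    driver: bridge
```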

[-] manwichmakesameal@lemmy.world 3 points 11 months ago

Sure did. I totally tried recording sounds of the coins dropping in. Never worked but I was too young to know why.


So I'm looking for a solution that is a self-hosted (docker preferably) podcast streamer/aggregator. I DO NOT NEED A DOWNLOADER. Ideally, I'd be able to add RSS feeds and stream them through a web interface that will keep track of progress, etc. I'm not talking about something that serves up downloaded podcasts either, I can do that in Plex/Jellyfin.

[-] manwichmakesameal@lemmy.world 2 points 11 months ago

Use a USB drive or otherwise download this on the Win side and get it over to your Ubuntu side: linky. Install that package and you should be able to build your kernel module using dkms.

[-] manwichmakesameal@lemmy.world 2 points 11 months ago

links is pretty lightweight. All joking aside, I'd look at adding RAM to it if possible. That's probably going to help the most.

[-] manwichmakesameal@lemmy.world 2 points 11 months ago* (last edited 11 months ago)

Also, to add to this: your setup sounds almost identical to mine. I have a NAS with multiple TBs of storage and another machine with plenty of CPU and RAM. Using NFS for your docker share is going to be a pain. I "fixed" my pains by also defining the shares inside my docker-compose files. What I mean by that is: specify your share in a top-level volumes section:

volumes:
  media:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.0,ro"
      device: ":/mnt/zraid_default/media"

Then mount that volume when the container comes up:

services:
  ...
  volumes:
    - type: volume
      source: media
      target: /data
      volume:
        nocopy: true

This way, I don't have to worry as much. I also use local directories for storing all my container info. e.g.: ./container-data:/path/in/container

[-] manwichmakesameal@lemmy.world 8 points 11 months ago

I'm 100% sure that your problem is permissions. You need to make sure the permissions match. Personally, I created a group specifically for my NFS shares, and when I export them they are mapped to that group. You don't have to do this, you can use your normal users, you just have to make sure the UID/GID numbers match. They can be named differently as long as the numbers match up.
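Concretely, the group-based approach above can look like this (a sketch — the path, subnet, names, and the GID 3000 are examples, not anything from the original setup):

```
# On both server and client, create a group with the SAME GID (the names may differ):
#   server:  groupadd -g 3000 nfsshare
#   client:  groupadd -g 3000 mediashare
#
# /etc/exports on the server, squashing all access to that group's IDs:
/mnt/zraid_default/media 192.168.0.0/24(rw,all_squash,anonuid=3000,anongid=3000)
#
# Re-export after editing:  exportfs -ra
```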


manwichmakesameal

joined 1 year ago