Docker

I have installed Nginx Proxy Manager on my Synology NAS using the Container Manager, but I haven’t set up SSL encryption. I already have a Let's Encrypt certificate via Control Panel > Security > Certificate. However, I want to use Nginx Proxy Manager as a reverse proxy instead of Web Station to forward my Docker instances from http://dockerinstance:8000/ to https://dockerinstance:8001/. Is there a guide for beginners?

thanks a lot!


I have a few separate Docker Compose files, and I thought that to make my life easier I'd just include them from another file, which should work like a project manager. But Compose keeps trying to take ownership of them. Is there any way to avoid that, so they're all treated as separate, autonomous Compose files?
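For reference, this is roughly what that looks like with Compose's include element (a minimal sketch; the file names are hypothetical). Everything pulled in via include is merged into the parent project, which is presumably the "taking ownership" behaviour described above:

# compose.yaml (parent file, hypothetical layout)
include:
  - ./serviceA/compose.yaml   # each of these is a normal, standalone Compose file
  - ./serviceB/compose.yaml

services:
  proxy:
    image: nginx:alpine

If the goal is to keep them as genuinely independent projects, running each file separately (docker compose -f serviceA/compose.yaml up -d, or giving each its own project name with -p) avoids that merge entirely, at the cost of not having a single file to bring everything up.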


I noticed docker compose is now telling me I can set COMPOSE_BAKE=true for "better performance".

Does anyone have any experience with this? Is it worth it? I get suspicious when a program tells me "just use this, it has better performance" but then doesn't make it the default.
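For anyone else seeing the hint: as far as I understand, COMPOSE_BAKE=true just makes docker compose build delegate image builds to Buildx Bake rather than the classic builder, so it mainly matters for projects that actually build images (especially several in parallel). It can be tried per invocation without committing to it; a small sketch:

# one-off: build this project's images via Bake instead of the default builder
COMPOSE_BAKE=true docker compose build

# if it helps, it can be persisted in the project's .env file next to compose.yaml
echo "COMPOSE_BAKE=true" >> .env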


I want to be sure the torrent traffic of my transmission Docker instance goes through my VPN.

I have several interfaces with different VLANs on the host, and I want to be sure the container created with docker compose uses only a specific interface. The interface on the correct VLAN has IP 192.168.90.92.

I have tested host connectivity with curl --interface ethX https://api.ipify.org/ and it works fine, i.e. the public IPs reported for the different interfaces are different.

I have tried the following in the docker compose file:

ports:
  - 9091:9091                     # Web UI port
  - 192.168.90.92:51413:51413     # Torrent port (TCP)
  - 192.168.90.92:51413:51413/udp # Torrent port (UDP)

However, the traffic is still coming from the default gateway.

Any idea?

Thanks!
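For context: binding the published ports to 192.168.90.92 only controls which host address listens for incoming connections; outgoing traffic from the container still leaves through the default bridge and follows the host's default route. One way to pin egress to that VLAN is a macvlan network attached to the VLAN interface. A rough sketch, where the parent interface name (eth0.90), gateway and container address are assumptions to adapt:

networks:
  vlan90:
    driver: macvlan
    driver_opts:
      parent: eth0.90            # assumed name of the host interface carrying the 192.168.90.0/24 VLAN
    ipam:
      config:
        - subnet: 192.168.90.0/24
          gateway: 192.168.90.1  # assumed VLAN gateway

services:
  transmission:
    # ...existing transmission settings...
    networks:
      vlan90:
        ipv4_address: 192.168.90.93   # a free address on the VLAN, distinct from the host's .92

With macvlan the container gets its own address on that VLAN, so all of its traffic (not just the published ports) leaves via the parent interface; the usual caveat is that the host and the container can't talk to each other directly over a macvlan link. Alternatively, routing the container through a VPN container's network namespace (network_mode: "service:vpn" style setups) is a common pattern for exactly this torrent-over-VPN case.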


Over the past week I've been dealing with the Kinsing malware, which got onto my VPS via Docker. I've been learning about it, and I've come to realise I've been thinking about Docker all wrong in the way I was using it.

I enjoy using Portainer, so that's a must for me. I know Docker lets you reach a remote daemon securely over SSH via a context: docker context create vps --docker "host=ssh://user@vps".

I would like to use this method from Portainer (running locally) to connect to Docker on the remote machine over SSH. Does anyone know of a way to do this? I've been looking around and haven't found much.
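One workaround I'm aware of (a sketch, not an official Portainer feature) is to let OpenSSH forward the remote Docker socket to a local TCP port and then add that TCP endpoint as an environment in Portainer; the host name and port below are placeholders:

# forward the VPS's Docker socket to local port 2375 (keep this tunnel running, e.g. via systemd)
ssh -nNT -L 127.0.0.1:2375:/var/run/docker.sock user@vps

# in Portainer, add a new Docker environment pointing at the Docker API over TCP:
#   tcp://127.0.0.1:2375
# (if Portainer itself runs in a container, point it at the host instead of 127.0.0.1,
#  e.g. via host networking or host.docker.internal)

Portainer's other supported routes for remote hosts are the Portainer Agent or an Edge Agent running on the VPS, which avoid exposing the Docker API directly.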


I recently asked the best way to run my Lemmy bot on my Synology NAS and most people suggested Docker.

I'm currently trying to get it running on my machine in Docker before transferring it over there, but am running into trouble.

Currently, to run locally, I navigate to the folder and type npm start. That executes tsx src/main.ts.

The first thing main.ts does is check argv for a third argument, dev; if it's present, it loads .env.development, otherwise it loads .env, which contains the environment variables. It puts those variables into a local variable that I then pass around in the bot. I am definitely not tied to this approach if there is a better-practice way of doing it.

The opening lines of main.ts:

import { config } from 'dotenv';

let path: string;

const env = process.argv[2];
if (env && env === 'dev') {
    path = '.env.development';
} else {
    path = '.env';
}

config({
    override: true,
    path
});

const {
    ENVIROMENT_VARIABLE_1
} = process.env as Record<string, string>;

Ideally, I would like to build one Docker image and then run it with either the .env.development variables or the .env ones... maybe even with a completely separate file I decide to create after the fact.

Right now, I can't even run it. When I type docker-compose up I get npm start: not found.

My Dockerfile

FROM node:22
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
USER node
COPY . .
CMD "npm start"

My compose.yaml

services:
  node:
    build: .
    image: an-image-name:latest
    environment:
      - ENVIROMENT_VARIABLE_1 = ${ENVIROMENT_VARIABLE_1}

I assume the current problem is something to do with where stuff is being copied to and what the workdir is, but don't know precisely how to address it.

And once that's resolved, I have even less idea how to go about passing through the environment variables.

Any help would be much appreciated.
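For what it's worth, the npm start: not found error most likely comes from the CMD line: CMD "npm start" is shell form, so the quoted string is treated as a single executable literally named "npm start", which doesn't exist. A sketch of one way to fix it, plus using Compose's env_file to pick which variables the container gets (the layout mirrors the files above, but treat it as an assumption rather than the one true way):

FROM node:22
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
USER node
COPY . .
# exec form: runs the npm binary with "start" as an argument,
# instead of looking for a program literally named "npm start"
CMD ["npm", "start"]

And in compose.yaml, pointing env_file at whichever file suits the environment:

services:
  node:
    build: .
    image: an-image-name:latest
    env_file:
      - .env            # swap for .env.development (or any other file) when running dev

With env_file the variables are injected by Docker at run time, so the dotenv loading in main.ts becomes optional (process.env will already contain them); alternatively, keep dotenv and bind-mount the chosen file into the container. One more small thing: in the environment: list form, the spaces around = can end up as part of the variable name, so ENVIROMENT_VARIABLE_1=${ENVIROMENT_VARIABLE_1} without spaces is safer.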


Hi guys, I have no problem running Docker containers via the CLI, but I thought it would be nice to try Docker Desktop on my Ubuntu machine. However, as soon as I start Docker Desktop it sits at "starting Docker engine" indefinitely until my drive is full; the .docker folder ends up around 70 GB. I read somewhere that this is the virtual disk being created and that I could change its size in the settings, but the settings are blocked until the engine has finished starting (which it never does). Has anyone else experienced this?

submitted 7 months ago* (last edited 7 months ago) by batman654987@lemmynsfw.com to c/docker@programming.dev
 
 

I installed Ollama so I can use AI locally on my computer, and now I want to use Open WebUI. That needs to run in Docker, so I did that, and it hosts a page that is the GUI for Open WebUI. It's working, but I have this problem: https://github.com/open-webui/open-webui/discussions/4376

So I pasted this command as they suggest:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

But after that it returned this error:

docker: Error response from daemon: Conflict. The container name "/open-webui" is already in use by container "1cbc8ac3b80f2a6921778964f94eff32541a4540ee6ab5d3335427a0fc8366a8". You have to remove (or rename) that container to be able to reuse that name. See 'docker run --help'.

Can anyone help me with this?
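In case it's still unresolved: that error just means a container named open-webui already exists from the first attempt (the one without --network=host), so Docker refuses to reuse the name. Removing the old container and re-running the command should clear it; a quick sketch:

# stop and remove the existing container; the open-webui named volume (and its data) is kept
docker rm -f open-webui

# then re-run the command from the discussion
docker run -d --network=host -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui --restart always ghcr.io/open-webui/open-webui:main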


I'm experimenting with an i2p and librewolf container setup in Docker Compose. However, the i2p web console front end (127.0.0.1:7657) becomes inaccessible if the container itself is restarted. This can be remedied by removing the directories created by the volume mappings in the compose file, but that is obviously not ideal. Does anyone have experience with this problem? I've seen hints from people online suggesting that the data in those directories somehow gets corrupted, but I have not yet investigated that further.

version: "3.5"
services:
  i2p_router:
    image:
      geti2p/i2p:latest
    environment:
    - JVM_XMX=256m
    volumes:
    - ./i2phome:/i2p/.i2p
    - ./i2ptorrents:/i2psnark
    ports:
    - 4444:4444
    - 6668:6668
    - 7657:7657
    - 9001:12345
    - 9002:12345/udp

  libre_wolf:
    image:
      linuxserver/librewolf
    ports:
    - 9300:3000
    - 9301:3001

volumes:
  i2phome:
  i2ptorrents:
networks:
  frontend:
    driver: bridge

Are you shipping your dev environment? Let me show you how we can use Buildx in Docker to go from 2.5 GB to just 5 MB.
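The linked article isn't reproduced here, but size drops like that usually come from a multi-stage build: compile in a full toolchain image, then copy only the resulting binary into a minimal (or empty) final stage. A generic sketch for a statically linked Go program, not taken from the article:

# build stage: full toolchain image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# final stage: ships only the binary, a few megabytes instead of gigabytes
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]

For interpreted languages the same idea applies with slim or distroless base images rather than scratch.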


I have a docker compose file with a bind volume. It basically mounts /media/user/drive/media to the container's /mnt.

It works as expected when /media/user/drive/ is mounted and its media folder has the files I want the container to see.

However, as it's a network drive, the container usually tries to start before the drive is mounted, so it would throw an error that /media/user/drive/media doesn't exist. So, while the drive was not mounted, I created an empty folder called media in /media/user/drive so that the container at least starts with /mnt empty until the network drive gets mounted and all the files appear at /media/user/drive/media.

To my surprise, when the drive does get mounted, the container still sees /mnt as empty, even though ls /media/user/drive/media on the host lists the drive contents correctly.

How would I go about getting the drive's files to show up inside the Docker container when it starts automatically?
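That behaviour is expected with Docker's default bind-mount settings: the mount uses private propagation, so a filesystem the host mounts under /media/user/drive after the container has started never shows up under the container's /mnt. Switching the bind mount to slave/shared propagation usually fixes it; a sketch using the long volume syntax (the service name is a placeholder):

services:
  myservice:
    # ...existing settings...
    volumes:
      - type: bind
        source: /media/user/drive/media
        target: /mnt
        bind:
          propagation: rslave   # mounts made on the host after startup propagate into the container

This relies on the host side of the mount being shared, which is the systemd default on most distros. Failing that, simpler options are restarting the container once the share is mounted, or ordering the container's start after the mount (for example a systemd unit with RequiresMountsFor= that then starts the compose project).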


I am hoping that you awesome people can help me with something I've noticed in my Plex logs. Quick notes on my setup:

Mini PC running Ubuntu 22.04, with Portainer, Plex, the arrs and calibre all running in Docker. All of these except Plex use a bridge network that I created in Portainer. The PC is connected to the router by Ethernet cable, and I have set a static IP in the router settings. I have also added the static IP info to the network settings in Ubuntu.

The following text is repeated over and over in the Plex Media Server log, about 6 seconds apart. My playback is mostly fine, but I have been experiencing buffering. Regardless, this can't be right!

n.b. I did post elsewhere but I feel that this is not necessarily Plex related and you can likely help with this more technical question.

DEBUG - NetworkInterface: received Netlink message len=88, type=RTM_DELADDR, flags=0x0
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - NetworkInterface: Netlink address message family=2, index=3, flags=0x0
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - Network change.
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - NetworkInterface: Notified of network changed (force=0)
Aug 22, 2024 18:51:49.637 [139016208919352] DEBUG - Network change notification but nothing changed.


I have Readarr and all the other arrs working on Ubuntu with Docker and Portainer. I followed the TRaSH guides and the LinuxServer.io guides to get me this far. I want to expand my book library, and so I have added calibre.

After having calibre import my book library, I went to readarr to delete the root, and re-add it with the new path to the calibre library. I am having problems with the Calibre Settings on the Add Root page.

The calibre server is listening at 172.18.0.2, port 8081, HTTP. I have created a user account on the calibre "sharing over the net" page. In readarr, I have set the Calibre Host to 172.18.0.2 and the Calibre Port to 8081. When I click save, I get the error Unknown exception: Http request timed out.

Most of the guides I have found are 3 or 4 years old. In one guide the Calibre Host was set to calibre; that doesn't work. Setting the Host to the IP of my server doesn't work either.

Can anyone help? I don't know if I have a permissions or firewall problem, or if I am just doing something wrong. The calibre logs are not showing any issues. I have copied the .yaml files I'm using below.

services:
  calibre:
    image: lscr.io/linuxserver/calibre:latest
    container_name: calibre
    security_opt:
      - seccomp:unconfined
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - CLI_ARGS = #optional
    volumes:
      - /data/calibre:/config
      - /data/Media/calibre:/library
      - /data/Media/books:/upload
    ports:
      - 8080:8080
      - 8081:8081
    restart: unless-stopped

services:
  readarr:
    image: lscr.io/linuxserver/readarr:develop
    container_name: readarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /data/readarr:/config
      - /data/Media/calibre:/library
      - /data/Media/downloads:/downloads
    ports:
      - 8787:8787
    restart: unless-stopped
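Not a definitive answer, but one thing that stands out from the files above: two separate compose files create two separate Compose projects, each with its own default bridge network, and containers on different bridge networks can't reach each other, which would explain the timeout to 172.18.0.2. A common fix is to put both services on one shared, externally created network and then use the container name as the host. A sketch (the network name media is arbitrary):

# create the shared network once on the host
docker network create media

Then, in both compose files:

services:
  calibre:              # and likewise under readarr in the other file
    # ...existing settings...
    networks:
      - media

networks:
  media:
    external: true

With both containers attached to that network, setting the Calibre Host in Readarr to calibre and the port to 8081 should resolve through Docker's internal DNS, assuming calibre's content server is actually enabled on 8081.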