Docker Compose Setup for Multi-Container Self-Hosted Applications



Docker Compose transformed how I manage my homelab. Instead of juggling multiple container commands and wrestling with networking, I describe my entire stack in one YAML file and run docker compose up. After two years running Nextcloud, Vaultwarden, Jellyfin, and Uptime Kuma together on a modest VPS, I've learned what actually works and what breaks at 3 AM.

Why Docker Compose Matters for Homelabs

When you're self-hosting, you're not just running one application—you're running an ecosystem. Nextcloud needs a database. Vaultwarden needs a reverse proxy. Jellyfin needs persistent media storage. Managing these with raw docker run commands is chaos. You lose track of which port maps to what, which volumes are persistent, and how containers talk to each other.

Docker Compose solves this by letting you define your entire stack—services, networks, volumes, environment variables—in a single declarative file. I can version control it, replicate it across machines, and hand it to someone else with zero ambiguity about how it's supposed to work.

The real power is that Compose manages networking automatically. Every service gets a DNS name matching its service name. When Nextcloud needs to talk to its database, it just uses postgres as the hostname. No more tracking container IPs.

Anatomy of a Production Compose File

I'll walk you through a real stack I've been running: Nextcloud with a PostgreSQL backend and Redis cache, a reverse proxy (Caddy), and monitoring (Uptime Kuma). This is production-grade—it's the exact file I'm running right now.

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - web
    depends_on:
      - nextcloud
      - uptime-kuma

  postgres:
    image: postgres:16-alpine
    container_name: nextcloud-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: your_secure_password_here
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U nextcloud"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: nextcloud-redis
    restart: unless-stopped
    networks:
      - internal
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3

  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: your_secure_password_here
      REDIS_HOST: redis
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: admin_password_here
      NEXTCLOUD_TRUSTED_DOMAINS: "nextcloud.yourdomain.com"
      OVERWRITEPROTOCOL: https
    volumes:
      - nextcloud_data:/var/www/html
      - ./config:/var/www/html/config
    networks:
      - internal
      - web

  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - uptime_data:/app/data
    networks:
      - web
    healthcheck:
      test: ["CMD", "extra/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  postgres_data:
  caddy_data:
  caddy_config:
  nextcloud_data:
  uptime_data:

networks:
  internal:
    driver: bridge
  web:
    driver: bridge

Let me unpack what's happening here. I've split this into two networks: internal for services that talk to each other but shouldn't be exposed, and web for anything the reverse proxy touches. Caddy sits on both and routes external traffic into the stack.

Notice the depends_on with condition: service_healthy on Nextcloud. This is critical. Without health checks, Compose would start the database container and Nextcloud might try to connect before PostgreSQL is actually ready to accept connections. I've seen this bite people constantly. The health check runs pg_isready until the database responds; only then does Nextcloud start.

Tip: Always define health checks for stateful services (databases, caches). Use pg_isready for PostgreSQL, redis-cli ping for Redis. This prevents race conditions where dependent services start too early and fail.
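The same pattern works if you prefer MariaDB, which Nextcloud also supports. A hedged sketch (the image tag and credentials are placeholders; healthcheck.sh is the helper script that ships in recent official mariadb images, and start_period gives the first initialization extra time):

```yaml
  mariadb:
    image: mariadb:11
    restart: unless-stopped
    environment:
      MARIADB_DATABASE: nextcloud
      MARIADB_USER: nextcloud
      MARIADB_PASSWORD: your_secure_password_here
      MARIADB_RANDOM_ROOT_PASSWORD: "1"
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - internal
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
```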

The Caddyfile Configuration

I prefer Caddy for reverse proxying because it provisions and renews HTTPS certificates automatically. Here's the Caddyfile that works with the compose stack above:

(security_headers) {
  header X-Content-Type-Options nosniff
  header X-Frame-Options SAMEORIGIN
  header Referrer-Policy strict-origin-when-cross-origin
}

nextcloud.yourdomain.com {
  import security_headers
  reverse_proxy nextcloud:80 {
    header_up X-Real-IP {http.request.remote.host}
  }
}

uptime.yourdomain.com {
  import security_headers
  reverse_proxy uptime-kuma:3001
}

The key thing here is that Caddy can reach Nextcloud at nextcloud:80 because they're on the same Docker network. No IP addresses. No port forwarding gymnastics. Just the service name and its internal port.

Managing Secrets and Environment Variables

You'll notice I put passwords directly in the YAML above. Don't do that in production. Use a .env file instead, and add it to .gitignore:

cat > .env << 'EOF'
POSTGRES_PASSWORD=generate_strong_password_here
NEXTCLOUD_ADMIN_PASSWORD=another_strong_password
REDIS_PASSWORD=and_another_one
EOF

chmod 600 .env

Then reference these in your compose file with ${VARIABLE_NAME}:

environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  NEXTCLOUD_ADMIN_PASSWORD: ${NEXTCLOUD_ADMIN_PASSWORD}

When you run docker compose up, it reads from .env automatically. Never commit this file to version control.
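Rather than inventing passwords by hand, I generate them. A minimal sketch using openssl (the variable name matches the compose file above; openssl is present on most Linux hosts):

```shell
# Append a randomly generated secret to .env
POSTGRES_PASSWORD="$(openssl rand -base64 32)"
printf 'POSTGRES_PASSWORD=%s\n' "$POSTGRES_PASSWORD" >> .env
chmod 600 .env   # secrets should be readable by the owner only
```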

Volumes: Persistent Data Without Heartbreak

Docker volumes are how your containers persist data across restarts. I learned this the hard way—lost an entire Nextcloud installation once because I didn't mount the right volume. Now I'm meticulous about it.

In the compose file above, I define named volumes at the bottom: postgres_data, nextcloud_data, etc. These live on your host machine in Docker's managed location (usually /var/lib/docker/volumes). They survive container restarts and removals.

Watch out: Named volumes are persistent by default, but they're also opaque—you can't easily browse them from your host. If you need direct filesystem access, use bind mounts instead: ./nextcloud_files:/var/www/html. Just make sure the host directory exists and has the correct ownership (UID 33, the www-data user in Debian-based images like Nextcloud's).

For database data, always use named volumes. For application config files you might edit manually, bind mounts work better. For media libraries (Jellyfin, music, photos), bind mounts to your actual storage drives make sense.
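Here's how those three rules might look together for a hypothetical Jellyfin service (the host paths and tag are placeholders for your own layout):

```yaml
  jellyfin:
    image: jellyfin/jellyfin:latest    # pin a real tag in production
    restart: unless-stopped
    volumes:
      - jellyfin_config:/config        # app state: named volume, durable but opaque
      - ./jellyfin-cache:/cache        # cache: bind mount, easy to inspect and wipe
      - /mnt/storage/media:/media:ro   # media library: bind mount to the storage drive, read-only
    networks:
      - web

volumes:
  jellyfin_config:
```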

Real-World Debugging Patterns

When something breaks—and it will—here's my workflow:

Check logs: docker compose logs -f nextcloud shows you everything Nextcloud is printing. The -f flag follows new output in real time, like tail -f.

Inspect a service: docker compose exec nextcloud sh drops you into a shell inside the running Nextcloud container. From there you can debug database connectivity with psql -h postgres -U nextcloud nextcloud, check if files exist, anything.

Rebuild and restart: docker compose up -d recreates any service whose configuration changed, and docker compose down && docker compose up -d tears down everything and starts fresh. To bounce a misbehaving service, docker compose restart nextcloud is faster—but note that restart never picks up image, environment, or mount changes.

View resource usage: docker stats shows CPU, memory, and network usage per container in real time. If one service is eating RAM, you'll see it immediately. This is invaluable for tuning.

Deployment to a Real VPS

I host this stack on a RackNerd KVM VPS with 4 cores, 8 GB RAM, and 160 GB SSD, and it's been rock-solid for my production workload. Install Docker and Compose, clone your git repo, create the .env file, and run docker compose up -d. That's it.

One critical thing: pin your image versions. image: nextcloud:latest means you'll silently upgrade whenever you pull or recreate the container, which can break things. I use image: nextcloud:29.0 instead. Check your images regularly and update them deliberately, not accidentally at 2 AM.
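One way to keep pins manageable is to put the versions in the same .env file, so bumping a release is a one-line change. A sketch (the variable names and version numbers are mine, not part of the stack above; ${VAR:-default} falls back to the default when the variable is unset):

```yaml
# In .env:
#   NEXTCLOUD_VERSION=29.0
#   CADDY_VERSION=2.8

services:
  nextcloud:
    image: nextcloud:${NEXTCLOUD_VERSION:-29.0}
  caddy:
    image: caddy:${CADDY_VERSION:-2.8}
```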

Next Steps

Start with a simple two-container stack: an application and its database. Get comfortable with how services communicate. Add monitoring with Uptime Kuma. Once you're confident, layer in a reverse proxy. The complexity compounds fast, so build incrementally.

Keep your Compose files in Git. Document any manual steps (DNS setup, SSL certificates, etc.) in a README. Future you will be grateful when you need to rebuild in three months and remember nothing.
