Setting Up Docker Compose for Multi-Container Self-Hosted Applications
Docker Compose is the glue that holds multi-container homelab setups together. Instead of juggling five separate docker run commands, you define everything in a single YAML file, hit enter, and your entire application stack boots up—networking, volumes, environment variables, all orchestrated. I prefer Compose because it's simple enough to learn in an afternoon but powerful enough to scale from a Raspberry Pi to a small VPS.
This tutorial walks you through building a real, production-adjacent stack: a web application with a database backend, reverse proxy, and persistent storage. You'll understand not just the "how" but the "why" behind each configuration choice.
What You'll Need
I'm assuming you already have Docker installed. If not, grab it from the official Docker site. Docker Compose comes bundled with Docker Desktop, but on Linux servers, you may need to install it separately via your package manager or as a standalone binary.
For this tutorial, I'll use a modest VPS setup—something like RackNerd's KVM VPS (1 vCore, 2GB RAM, 40GB SSD) is more than enough to test these concepts. Compose works identically whether you're on a $3/month VPS or your home server.
Understanding Docker Compose Basics
Docker Compose reads a file called docker-compose.yml (or docker-compose.yaml—both work) and interprets it as a blueprint for your entire application. Each service (container) gets its own section. Think of it as Infrastructure as Code for your containers.
The three pillars of Compose are:
- Services: Individual containers (web app, database, cache, etc.)
- Networks: Isolated communication channels between services
- Volumes: Persistent storage that survives container restarts
When you run docker-compose up, Compose creates a private network, boots all services in dependency order, and keeps them running. When you run docker-compose down, everything stops cleanly. This is why I love it for testing: reproducible, isolated, disposable.
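To make the three pillars concrete before the full example below, here's a minimal sketch of a Compose file that uses all three (the image, names, and paths are illustrative, not part of the Nextcloud stack we'll build):

```yaml
services:
  app:
    image: nginx:alpine          # any web-facing service
    ports:
      - "8080:80"                # host port 8080 -> container port 80
    networks:
      - internal                 # joins the private network below
    volumes:
      - app_data:/usr/share/nginx/html   # named volume, survives restarts

networks:
  internal:                      # Compose creates and manages this bridge

volumes:
  app_data:                      # Compose creates and manages this volume
```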
Your First Multi-Container Stack
Let me show you a complete, real-world example: a Nextcloud instance (self-hosted cloud storage) with MariaDB database and Redis caching. This is a stack I run on my own homelab.
version: '3.8'

services:
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud-web
    depends_on:
      - db
      - redis
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud_user
      MYSQL_PASSWORD: ${DB_PASSWORD}
      REDIS_HOST: redis
      REDIS_HOST_PORT: 6379
      NEXTCLOUD_ADMIN_USER: ${ADMIN_USER}
      NEXTCLOUD_ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      NEXTCLOUD_TRUSTED_DOMAINS: "localhost nextcloud.example.com"
    volumes:
      - nextcloud_data:/var/www/html
      - nextcloud_config:/var/www/html/config
    ports:
      - "8080:80"
    networks:
      - nextcloud_network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/status.php"]
      interval: 30s
      timeout: 10s
      retries: 3

  db:
    image: mariadb:11
    container_name: nextcloud-db
    environment:
      MYSQL_ROOT_PASSWORD: ${ROOT_PASSWORD}
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud_user
      MYSQL_PASSWORD: ${DB_PASSWORD}
    volumes:
      - nextcloud_db:/var/lib/mysql
    networks:
      - nextcloud_network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 3

  redis:
    image: redis:7-alpine
    container_name: nextcloud-redis
    command: redis-server --appendonly yes
    volumes:
      - nextcloud_redis:/data
    networks:
      - nextcloud_network
    restart: unless-stopped

volumes:
  nextcloud_data:
  nextcloud_config:
  nextcloud_db:
  nextcloud_redis:

networks:
  nextcloud_network:
    driver: bridge
Before running this, create an .env file in the same directory:
DB_PASSWORD=your_secure_db_password_here
ROOT_PASSWORD=your_secure_root_password_here
ADMIN_USER=admin
ADMIN_PASSWORD=your_secure_admin_password_here
Never commit .env files to version control. Add .env to your .gitignore. Compose loads these variables at runtime and interpolates them into the YAML wherever it sees the ${VARIABLE_NAME} syntax.

Now start the stack:
docker-compose up -d
The -d flag runs in detached mode (background). Watch the startup:
docker-compose logs -f nextcloud
Hit Ctrl+C to exit the logs. The "ready for connections" message actually comes from MariaDB, so check docker-compose logs db for that one. Once the database is up and Nextcloud finishes its first-run initialization, access it at http://localhost:8080.
Key Configuration Patterns I Use
Environment Variables & Secrets
I always externalize configuration. Hardcoding passwords in YAML is a fast way to accidentally leak them. The ${VARIABLE_NAME} syntax pulls from your .env file. For production on a VPS, I use Docker Secrets or a proper secret manager, but .env is fine for homelab.
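Compose's interpolation syntax also supports defaults and required-variable errors, which I find handy for catching a missing or incomplete .env early instead of booting a half-configured stack. A sketch (the TZ variable is just an illustrative example):

```yaml
environment:
  MYSQL_PASSWORD: ${DB_PASSWORD:?DB_PASSWORD is not set}  # abort startup if missing
  TZ: ${TZ:-UTC}                                          # fall back to a default
```

With the `:?` form, `docker-compose up` fails immediately with that error message rather than starting the database with an empty password.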
Networks
By default, Compose creates a bridge network that all services join automatically. This means the db service is reachable at db:3306 from any other container—Docker's internal DNS resolves service names, so there's no need to hardcode IPs. If you need containers to talk to the host machine, use host.docker.internal (macOS/Windows) or the host's actual IP (Linux).
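If you want tighter isolation—say, the database reachable from the web app but not from the reverse proxy—you can define multiple networks and attach services selectively. A sketch (network names here are my own, not from the stack above):

```yaml
services:
  nextcloud:
    networks: [frontend, backend]
  db:
    networks: [backend]     # invisible to anything on frontend only
  caddy:
    networks: [frontend]

networks:
  frontend:
  backend:
    internal: true          # containers on this network get no outbound internet
```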
Volumes
Named volumes like nextcloud_data:/var/www/html are managed by Docker and persist even if the container is destroyed. Bind mounts like ./config:/etc/app/config mount directories from your host filesystem. I prefer named volumes for production because they're easier to backup and migrate.
Dependency & Health Checks
Notice the depends_on field—it tells Compose to start the database before the web service. However, depends_on only waits for the container to start, not for the application inside to be ready. That's why I add healthcheck blocks. The web service defines a curl command that checks if Nextcloud is actually responding. In real setups, you'd add a more robust wait script, but healthchecks are a good first step.
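With healthchecks defined, recent Compose versions let depends_on wait for actual health rather than mere startup, which largely replaces external wait scripts. A sketch of the long-form syntax applied to this stack:

```yaml
nextcloud:
  depends_on:
    db:
      condition: service_healthy   # wait until the db healthcheck passes
    redis:
      condition: service_started   # redis has no healthcheck, so start is enough
```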
Managing Your Stack: Common Commands
Here's what I run daily:
- docker-compose up -d — Start all services in the background
- docker-compose down — Stop and remove all containers (data in volumes persists)
- docker-compose logs -f service_name — Stream logs from a specific service
- docker-compose ps — List running containers and their status
- docker-compose exec db mysql -u nextcloud_user -p — Execute a command inside a container (great for database access)
- docker-compose pull && docker-compose up -d — Update all images and restart
- docker-compose restart nextcloud — Restart a single service without touching others
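That pull-and-restart pair is easy to wrap in a script. Here's a sketch of what mine looks like (the prune step and paths are my own additions—adjust to your layout):

```shell
#!/bin/bash
# update.sh — pull new images, recreate changed containers, reclaim disk
set -euo pipefail

cd "$(dirname "$0")"    # run from the directory holding docker-compose.yml

docker-compose pull     # fetch newer versions of every image in the file
docker-compose up -d    # recreate only containers whose image changed
docker image prune -f   # delete superseded image layers
```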
I keep a small script called update.sh that pulls the latest images, runs any migrations, and restarts the stack. It makes updates one-liner simple.

Adding a Reverse Proxy
In production, you don't want services exposed on random ports. I use Caddy (lightweight, built-in HTTPS) to sit in front. Add this service to your Compose file:
  caddy:
    image: caddy:latest
    container_name: nextcloud-caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - nextcloud_network
    restart: unless-stopped
    depends_on:
      - nextcloud

Remember to also declare caddy_data and caddy_config under the top-level volumes: key, alongside the Nextcloud volumes.
And create a Caddyfile in your project root:
nextcloud.example.com {
    reverse_proxy nextcloud:80
    encode gzip
}
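One Nextcloud-specific note: its admin overview will complain about CalDAV/CardDAV discovery when running behind a proxy. The fix recommended in the Nextcloud docs is a pair of well-known redirects, which in Caddyfile syntax would look roughly like this:

```
nextcloud.example.com {
    reverse_proxy nextcloud:80
    encode gzip

    redir /.well-known/carddav /remote.php/dav 301
    redir /.well-known/caldav /remote.php/dav 301
}
```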
Replace nextcloud.example.com with your actual domain. Caddy automatically provisions Let's Encrypt certificates. Now update your Nextcloud service so its port is bound only to the host's loopback interface, not to every interface:
  nextcloud:
    # ... (rest of config)
    ports:
      - "127.0.0.1:8080:80"  # Only accessible from the host itself, not the world
This is the pattern I use on every self-hosted VPS: Caddy at the edge, Compose managing the internals. Clean, secure, and scalable.
Backup Strategy for Volumes
Compose handles networking and orchestration, but you own your backup strategy. I use a simple cron job that runs daily:
#!/bin/bash
set -euo pipefail

# DB_PASSWORD must be set in the environment, e.g. sourced from your .env file
BACKUP_DIR="/backups/nextcloud"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "${BACKUP_DIR}"

docker-compose exec -T db mysqldump -u nextcloud_user -p"${DB_PASSWORD}" nextcloud > "${BACKUP_DIR}/nextcloud_${TIMESTAMP}.sql"

# Note: Compose prefixes volume names with the project (directory) name,
# so the real path may be /var/lib/docker/volumes/<project>_nextcloud_data/_data/
tar -czf "${BACKUP_DIR}/nextcloud_files_${TIMESTAMP}.tar.gz" /var/lib/docker/volumes/nextcloud_data/_data/

find "${BACKUP_DIR}" -name "*.sql" -mtime +7 -delete
find "${BACKUP_DIR}" -name "*.tar.gz" -mtime +7 -delete
This backs up the database and file volume, then deletes backups older than 7 days. Adjust retention to your needs.
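A backup you've never restored is just hope, so here's the matching restore sketch under the same assumptions as the backup script (same paths, same project-prefix caveat on the volume name—test it against a scratch stack first):

```shell
#!/bin/bash
# restore.sh — restore a database dump and the data volume from one backup pair
# Usage: ./restore.sh nextcloud_20250101.sql nextcloud_files_20250101.tar.gz
set -euo pipefail

BACKUP_DIR="/backups/nextcloud"
SQL_FILE="$1"
TAR_FILE="$2"

# 1. Replay the SQL dump into the running db container
docker-compose exec -T db mysql -u nextcloud_user -p"${DB_PASSWORD}" nextcloud \
  < "${BACKUP_DIR}/${SQL_FILE}"

# 2. Restore the files; stop the web container so nothing writes mid-restore.
#    The backup tarred an absolute path, so extracting at / puts it back in place.
docker-compose stop nextcloud
tar -xzf "${BACKUP_DIR}/${TAR_FILE}" -C /
docker-compose start nextcloud
```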
Common Gotchas
Port conflicts: If port 80 or 443 is already in use, your Caddy service won't start. Check with lsof -i :80 and either kill the conflicting process or map to a different port in Compose.
Permissions: Volumes created by Compose are owned by root inside the container. If you later want to access files from the host, you'll hit permission errors. Use explicit user directives in your service definition: user: "1000:1000" (matching your host user ID).
Memory limits: Multi-container stacks can balloon in memory usage. Nextcloud + MariaDB + Redis easily consumes 1–2GB. On a 2GB VPS, set per-service limits to avoid OOM kills. Add mem_limit: 512m under your service.
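Putting the last two gotchas together, a hardened service stanza might carry both a user directive and a memory cap (the values here are assumptions for a 2GB box, and note that some images expect to start as root and may need volume permissions adjusted first):

```yaml
nextcloud:
  # ... (rest of config)
  user: "1000:1000"   # match your host UID:GID; check with `id -u` and `id -g`
  mem_limit: 512m     # hard cap; the container is OOM-killed past this
```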
DNS inside containers: Containers use Docker's internal DNS server. If you're running Pi-hole or AdGuard Home, don't point containers to it directly; it causes loops. Either run Pi-hole as a separate stack or use the host's resolver.
Next Steps
Start small: write a Compose file for something you actually use. Nextcloud, Vaultwarden (password manager), or Immich (photo gallery) are great entry points. Once you're comfortable, layer in monitoring (Uptime Kuma), logging (Loki), or automated updates (Watchtower). The Compose syntax is the same, just more services.
For infrastructure to run this on, a reliable VPS is essential. RackNerd offers solid KVM options starting at $10–20/year, which is plenty for a homelab. Whatever you choose, keep regular backups of your docker-compose.yml and .env files—they're your infrastructure in plain text.