Docker Compose for Multi-Container Self-Hosted Applications
When I first started self-hosting, running containers individually with long docker run commands was a nightmare. Environment variables scattered everywhere, manual networking between containers, and restarting everything meant typing out the same twenty flags again. Docker Compose changed that—it's the difference between chaos and actually sustainable infrastructure. Once you define your entire stack in a single YAML file, you go from minutes of manual setup to docker-compose up -d and you're live.
This tutorial walks you through real, production-ready Docker Compose configurations. I'll show you exactly how to structure multi-container applications, manage networking, handle persistent data, and debug when things inevitably go sideways.
Why Docker Compose Matters for Self-Hosting
Docker Compose is essentially orchestration for the rest of us. It's not Kubernetes—you don't need distributed systems expertise. It's just YAML that tells Docker "here's my database, here's my web server, they talk to each other on this network, and this data persists."
When you're running a self-hosted Nextcloud instance, for example, you need PostgreSQL, Redis for caching, and the Nextcloud container itself. Without Compose, you're managing three separate containers, three separate run commands, and praying they can actually talk to each other. With Compose, you define it once and forget it.
On a budget VPS—say a RackNerd instance at around $40/year—you might be squeezing multiple applications onto a single server. Docker Compose makes that feasible because it's lightweight and keeps everything organized. You're not wasting resources on a full orchestration platform.
Core Concepts: Services, Networks, and Volumes
A Docker Compose file is built on three pillars:
Services are your containers. Each service is one container image, and you can have as many as you want. A service definition includes the image, ports, environment variables, and mount points.
Networks let services discover each other by name. Your app connects to the database at postgres:5432, not some random container IP — Docker's internal DNS handles the resolution.
Volumes are persistent storage that outlives any individual container. Databases, user uploads, and anything else that shouldn't disappear when a container is recreated belongs in a volume.
Your First Multi-Container Stack
Let me show you a real example: a simple application with a web service, database, and caching layer. This is the pattern you'll see everywhere.
version: '3.8'

services:
  web:
    image: nginx:latest
    container_name: app-web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./html:/usr/share/nginx/html:ro
    depends_on:
      - app
    restart: unless-stopped
    networks:
      - app-network

  app:
    image: python:3.11-slim
    container_name: app-backend
    working_dir: /app
    volumes:
      - ./app:/app
    environment:
      DATABASE_URL: postgresql://appuser:apppass@postgres:5432/appdb
      REDIS_URL: redis://redis:6379
      DEBUG: "false"
    command: python app.py
    depends_on:
      - postgres
      - redis
    restart: unless-stopped
    networks:
      - app-network

  postgres:
    image: postgres:15-alpine
    container_name: app-db
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: apppass
      POSTGRES_DB: appdb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    container_name: app-cache
    restart: unless-stopped
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
    driver: bridge
This stack has four services. The web service (Nginx) listens on ports 80 and 443, the app service runs your Python code, PostgreSQL stores data, and Redis caches. Notice how the app service connects to the database at postgres:5432—that hostname is automatically resolved by Docker's internal DNS because they're on the same network.
The depends_on directive tells Docker to start PostgreSQL and Redis before the app service, and the app before the web service. Without this, your app might try to connect to the database before it's ready.
Use depends_on to control startup order, but always implement retry logic in your application code. Just because a container started doesn't mean it's ready to accept connections.

Environment Variables and Configuration
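Here's a minimal sketch of that retry logic in Python. The helper is generic: it takes any zero-argument callable that raises on failure, so you can wrap whatever database driver your app actually uses.

```python
import time


def wait_for(connect, attempts=10, delay=1.0):
    """Call `connect` until it succeeds or `attempts` run out.

    `connect` is any zero-argument callable that raises on failure.
    Returns whatever `connect` returns on success; re-raises the
    last exception if every attempt fails.
    """
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception:
            if attempt == attempts:
                raise
            # the database container is up but not ready yet; back off
            time.sleep(delay)
```

In the app service you'd wrap the real connection call, e.g. `conn = wait_for(lambda: psycopg2.connect(os.environ["DATABASE_URL"]))` — psycopg2 here is an assumption for illustration; substitute your own driver.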
Hardcoding secrets in your Compose file is a disaster. I use a .env file that Compose reads automatically. Git-ignore it, obviously.
# Create .env file
cat > .env << 'EOF'
POSTGRES_USER=appuser
POSTGRES_PASSWORD=your_strong_password_here
POSTGRES_DB=appdb
REDIS_PASSWORD=your_redis_password_here
DEBUG=false
DOMAIN=example.com
TIMEZONE=UTC
EOF
# Protect it
chmod 600 .env
echo ".env" >> .gitignore
Then reference those variables in your Compose file using ${VARIABLE_NAME} syntax. Your updated Postgres service would look like:
  postgres:
    image: postgres:15-alpine
    container_name: app-db
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - app-network
Compose loads the .env file automatically, so you never have to manually export variables.
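Compose's interpolation syntax also supports fallbacks and required-variable checks, which is handy for values that are safe to default (never default a secret). A small sketch:

```yaml
services:
  app:
    environment:
      # fall back to UTC if TIMEZONE isn't set in .env or the shell
      TZ: ${TIMEZONE:-UTC}
      # fail fast with a clear message instead of starting misconfigured
      DB_PASSWORD: ${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is not set}
```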
If you maintain multiple environment files (say, .env.dev and .env.prod), point Compose at the right one with the --env-file flag: docker-compose --env-file .env.prod up -d. By default, Compose only reads the file named .env.

Managing Volumes and Persistent Data
Volumes are non-negotiable for databases, user-uploaded files, and anything that shouldn't disappear on a restart. In the example above, postgres_data:/var/lib/postgresql/data means the postgres_data volume (defined in the top-level volumes section) mounts at /var/lib/postgresql/data inside the container.
Docker manages the actual storage location, typically somewhere like /var/lib/docker/volumes/projectname_postgres_data/_data. You don't interact with it directly—that's the point.
For bind mounts (mounting a directory from your host), use the relative path syntax: ./nginx.conf:/etc/nginx/nginx.conf:ro. The :ro suffix makes it read-only. This is great for config files you want to edit on the host.
If you need to back up your database, you can exec into the container and dump it:
# Create a backup of PostgreSQL
docker-compose exec postgres pg_dump -U appuser appdb > backup.sql
# Restore from backup
docker-compose exec -T postgres psql -U appuser appdb < backup.sql
The -T flag disables pseudo-TTY allocation, which is what you need when piping data into stdin.
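For unattended backups, I wrap that pg_dump command in a small script that date-stamps each dump and prunes old ones — a sketch, with illustrative paths and retention you should adjust for your setup:

```shell
#!/bin/sh
# Dump the appdb database to a dated, compressed file, then prune.
set -eu

BACKUP_DIR=/var/backups/app        # illustrative path
STAMP=$(date +%Y-%m-%d)

mkdir -p "$BACKUP_DIR"
docker-compose exec -T postgres pg_dump -U appuser appdb \
  | gzip > "$BACKUP_DIR/appdb-$STAMP.sql.gz"

# keep only the 14 most recent daily dumps
ls -1t "$BACKUP_DIR"/appdb-*.sql.gz | tail -n +15 | xargs -r rm --
```

Run it from the directory containing your Compose file (or add a `cd` at the top) so docker-compose finds the right project.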
Networking: How Containers Talk to Each Other
By default, every service in your Compose file gets a hostname equal to its service name. If I have a service called postgres, other containers on the same network can reach it at postgres:5432. Docker's embedded DNS resolver handles this.
If you need to expose a service to the host machine (not just other containers), use the ports directive. ports: - "5432:5432" means "listen on the host's port 5432 and forward to the container's port 5432." You'd do this if an external tool needs to connect, or if you're debugging locally.
For a production setup, I typically only expose the reverse proxy (Nginx or Caddy) on ports 80 and 443. Everything else stays internal to the Docker network. Your application never talks directly to the outside world—the reverse proxy handles that.
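To enforce that "only the proxy is public" layout, you can split the stack across two networks so the database is reachable from the app but not from the proxy. A sketch, reusing the service names from the earlier example (only the network-related keys shown):

```yaml
services:
  web:
    ports:
      - "80:80"
      - "443:443"
    networks:
      - frontend          # can reach app, but not postgres

  app:
    networks:
      - frontend
      - backend           # bridges the two tiers

  postgres:
    networks:
      - backend           # no published ports, invisible to the proxy

networks:
  frontend:
  backend:
```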
Running and Managing Your Stack
Start everything in detached mode (runs in background):
docker-compose up -d
Check the status:
docker-compose ps
View logs from all services:
docker-compose logs -f
Or logs from a specific service:
docker-compose logs -f postgres
Stop everything:
docker-compose down
This stops and removes containers, but preserves volumes. If you also want to delete volumes (careful!), add -v:
docker-compose down -v
Restart a specific service without touching the others:
docker-compose restart app
Real-World Example: Self-Hosted Nextcloud
Let me show you a production-ready Nextcloud setup. This is what I actually run on a small VPS:
version: '3.8'

services:
  nextcloud:
    image: nextcloud:27-apache
    container_name: nextcloud
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - nextcloud_data:/var/www/html
      - ./config:/var/www/html/config
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
      NEXTCLOUD_ADMIN_USER: ${ADMIN_USER}
      NEXTCLOUD_ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      NEXTCLOUD_TRUSTED_DOMAINS: ${DOMAIN}
      REDIS_HOST: redis
    depends_on:
      - postgres
      - redis
    networks:
      - nextcloud-network

  postgres:
    image: postgres:15-alpine
    container_name: nextcloud-db
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - nextcloud-network

  redis:
    image: redis:7-alpine
    container_name: nextcloud-cache
    restart: unless-stopped
    networks:
      - nextcloud-network

volumes:
  nextcloud_data:
  postgres_data:

networks:
  nextcloud-network:
    driver: bridge
In your .env file:
DB_USER=ncuser
DB_PASSWORD=super_secure_password
DB_NAME=nextcloud
ADMIN_USER=admin
ADMIN_PASSWORD=another_secure_password
DOMAIN=files.example.com
Then docker-compose up -d and you have Nextcloud with PostgreSQL and Redis running. The Nextcloud container serves on port 8080 on the host, and you'd typically proxy it through Caddy or Nginx on the outside.
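For that outer proxy layer, a minimal Caddy configuration might look like this — assuming Caddy runs on the host and files.example.com already points at your server (Caddy handles the TLS certificate automatically):

```
files.example.com {
    reverse_proxy localhost:8080
}
```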
Debugging and Troubleshooting
When a service won't start, check the logs first:
docker-compose logs postgres
If you need to get inside a container and poke around:
docker-compose exec app bash
If the service isn't responding to health checks or feels slow, inspect what's happening:
docker stats
This shows CPU, memory, and network I/O for each running container. If your database is eating 3GB of RAM, you found your problem.
For network issues, make sure all services are on the same network and that you're using the service name, not localhost. Inside a container, localhost refers to that container itself, not the host or other containers.
Next Steps: Production Hardening
A working Compose file is a starting point, not the finish line. In production, you'll want:
Health checks so Docker knows when a container is actually ready. Add a healthcheck section to your services.
Resource limits to prevent one runaway container from consuming all your server's memory. Set them under deploy.resources.limits in each service definition.
Automated backups of your volumes. A cron job that runs docker-compose exec -T postgres pg_dump and uploads the result to S3 or wherever.
A reverse proxy in front of everything. If you're running on a small VPS, a single Caddy or Nginx container can own ports 80 and 443, terminate TLS, and route traffic to each internal service, so nothing else ever needs to be exposed to the internet.
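Putting the first two of those together, a hardened service definition might look like the sketch below. The intervals and limits are placeholders you'd tune for your workload, and the service_healthy condition requires Compose v2:

```yaml
services:
  postgres:
    image: postgres:15-alpine
    restart: unless-stopped
    healthcheck:
      # pg_isready ships in the postgres image
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.50"

  app:
    depends_on:
      postgres:
        # wait for a *healthy* database, not just a started container
        condition: service_healthy
```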