Docker Compose for Multi-Container Applications: A Step-by-Step Guide
When I first tried to run a homelab application with multiple services—a web frontend, API backend, and database—I manually created each container, linked networks, managed volumes by hand, and watched everything fall apart when I rebooted. That's when Docker Compose saved my life. Instead of wrestling with dozens of CLI flags, I define my entire stack in a single YAML file and bring everything up with one command.
Docker Compose turns orchestration from painful to boring—which is exactly what you want in a homelab. In this guide, I'll walk you through building real multi-container applications from scratch, covering networking, persistent storage, environment variables, and the gotchas I've hit along the way.
What Is Docker Compose and Why You Need It
Docker Compose is a tool for defining and running multi-container Docker applications. Instead of launching containers manually with long docker run commands, you write a declarative YAML file that describes all your services, their dependencies, networks, volumes, and environment configuration. Then one command brings them all up together.
I prefer Compose because it's:
- Reproducible: Anyone (or any machine) can clone your repo and spin up the identical stack.
- Version-controlled: Your infrastructure lives in git alongside your code.
- Fast to iterate: Change one line, run docker-compose up, and test immediately.
- Built-in networking: All containers on the same Compose network can reach each other by service name.
Docker Compose comes bundled with Docker Desktop, but on servers, you'll install it separately. On Linux, I usually run:
```shell
sudo apt update && sudo apt install docker-compose -y
```

Or grab the latest version directly:

```shell
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```
Your First Multi-Container Application
Let me show you a real example: a self-hosted URL shortener with a Node.js API, PostgreSQL database, and Redis cache. Create a directory called shortener and add this docker-compose.yml:
```yaml
version: '3.8'

services:
  api:
    build: .
    container_name: shortener-api
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      DATABASE_URL: postgresql://user:password@db:5432/shortener
      REDIS_URL: redis://cache:6379
    depends_on:
      - db
      - cache
    networks:
      - shortener-net
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    container_name: shortener-db
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: shortener
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - shortener-net
    restart: unless-stopped

  cache:
    image: redis:7-alpine
    container_name: shortener-cache
    networks:
      - shortener-net
    restart: unless-stopped

volumes:
  postgres_data:

networks:
  shortener-net:
    driver: bridge
```
Now bring everything up:

```shell
docker-compose up -d
```

Check the status:

```shell
docker-compose ps
```
Your API is immediately accessible at http://localhost:3000, the database is listening on the internal shortener-net network at hostname db, and Redis is available at cache:6379. All containers can talk to each other by service name—no manual networking required.
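One thing the compose file assumes but doesn't show: build: . means the api service needs a Dockerfile next to docker-compose.yml. Here's a minimal sketch for a Node.js app (the npm start entry point and package layout are assumptions about your project, not something Compose dictates):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Copy manifests first so dependency installs are layer-cached
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```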
I used depends_on to ensure the database and cache start before the API. However, depends_on only waits for containers to exist, not for services to be ready. For production, add a health check or a startup script that retries database connections.

Volumes: Keeping Data After Container Death
In my example above, I declared a named volume for PostgreSQL:
```yaml
volumes:
  postgres_data:
```
When the database container stops or is removed, the data persists on the host in Docker's managed volume directory. List all volumes on your system:
```shell
docker volume ls
```
You can also bind-mount a directory on your host:
```yaml
db:
  image: postgres:16-alpine
  volumes:
    - /mnt/data/postgres:/var/lib/postgresql/data
```
I prefer named volumes for most cases—they're cleaner and Docker manages the path for you. But for backups, bind-mounts make it easier to access files directly from your host.
To apply changes, run docker-compose down (volumes stay), update your compose file, and bring everything back up.

Environment Variables and .env Files
Hardcoding passwords in your compose file is a nightmare. Instead, use environment variables. Create a .env file in the same directory:
```
POSTGRES_USER=dbadmin
POSTGRES_PASSWORD=super_secret_here
NODE_ENV=production
REDIS_PASSWORD=redis_secret_here
```
Then reference them in your compose file:
```yaml
db:
  image: postgres:16-alpine
  environment:
    POSTGRES_USER: ${POSTGRES_USER}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    POSTGRES_DB: shortener
```
Never commit .env to version control. Add it to .gitignore and document expected variables in a .env.example file.
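Compose reads .env automatically for ${VAR} substitution, but if you also want those variables in your own shell scripts (a backup job, for example), you can source the file with set -a. A minimal sketch using a throwaway file with placeholder values:

```shell
#!/bin/sh
# Write a demo .env (placeholder values, not real secrets)
cat > /tmp/demo.env <<'EOF'
POSTGRES_USER=dbadmin
POSTGRES_DB=shortener
EOF

# set -a exports every variable assigned while it is active,
# so the sourced assignments become environment variables
set -a
. /tmp/demo.env
set +a

echo "connecting as $POSTGRES_USER to $POSTGRES_DB"
```

To see what Compose itself will use, docker-compose config prints the fully resolved file with every variable substituted.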
Networking: Services Talking to Each Other
When you define a custom network in Compose (like shortener-net, above), Docker runs an embedded DNS server. Every container on that network can reach others by service name. So my Node.js app connects to the database using the connection string postgresql://user:password@db:5432/shortener—not an IP address.
If you need to expose a service to the host machine, use ports:
```yaml
api:
  ports:
    - "3000:3000"  # host_port:container_port
```
But keep internal services (databases, caches) off the ports list. They're only accessible from other containers on the network, which is more secure.
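If you want to document that a service is internal-only, the expose key lists the container port without publishing it to the host:

```yaml
db:
  image: postgres:16-alpine
  expose:
    - "5432"  # visible to other containers on the network, never to the host
```

In practice expose is mostly documentation (containers on the same user-defined network can already reach each other on any listening port), but it makes your intent explicit.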
Health Checks and Restart Policies
I always add a restart policy to critical services:
```yaml
api:
  restart: unless-stopped
```
This tells Docker to automatically restart the container if it crashes, except when you explicitly stop it. Options are:
- no – Don't restart (default)
- always – Always restart, even if it exited cleanly
- unless-stopped – Restart unless explicitly stopped
- on-failure – Restart only if the exit code was non-zero
For production, add health checks so Docker knows when a service is actually ready:
```yaml
db:
  image: postgres:16-alpine
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U user"]
    interval: 10s
    timeout: 5s
    retries: 5
    start_period: 10s
```
The API can then wait for the database to be healthy, not just running.
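With that health check in place, the api service can use the long form of depends_on (supported by current Compose releases) so it waits for db to report healthy rather than merely started:

```yaml
api:
  depends_on:
    db:
      condition: service_healthy
    cache:
      condition: service_started
```

On docker-compose up, Compose will now hold the api container back until pg_isready succeeds.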
Logging and Debugging
Check what's happening inside your containers:
```shell
docker-compose logs -f api
```

The -f flag follows logs in real time (like tail -f). Omit the service name to see all services:

```shell
docker-compose logs
```
If a container is failing to start, run:

```shell
docker-compose up api
```

Without the -d flag, you see output directly. If it's crashing silently, exec into it:

```shell
docker-compose exec db sh
```
This opens a shell inside the db container. Now you can inspect files, check configs, and test connections manually.
Updating, Scaling, and Cleanup
When you pull a new image version, bring services down and back up:
```shell
docker-compose pull
docker-compose up -d
```
To scale a service (run multiple replicas), use:
```shell
docker-compose up -d --scale cache=3
```
However, you can't scale services that publish a fixed host port (Docker can't bind the same host port twice), and services with a container_name set won't scale either, since container names must be unique. Use expose instead of ports if you only need internal access.
To stop everything:

```shell
docker-compose down
```

This stops and removes containers but keeps volumes. To nuke everything including volumes:

```shell
docker-compose down -v
```
Real-World Tip: Using Compose with Watchtower for Auto-Updates
If you're running services on a VPS, you probably want to auto-update images. Add Watchtower to your compose file:
```yaml
watchtower:
  image: containrrr/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  command: --interval 86400 --cleanup
  restart: unless-stopped
```
Watchtower polls your images daily and redeploys containers when updates are available. The --cleanup flag removes old images to save disk space. This is a game-changer for a homelab on a budget—hands-off updates, with only a brief restart as each container is recreated.
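If you'd rather not have Watchtower touch every container on the host, it also supports opt-in filtering: pass --label-enable and label only the services you want auto-updated. A sketch (interval and label values as above):

```yaml
watchtower:
  image: containrrr/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  # --label-enable restricts updates to explicitly labeled containers
  command: --interval 86400 --cleanup --label-enable
  restart: unless-stopped

api:
  labels:
    - "com.centurylinklabs.watchtower.enable=true"
```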
Where to Deploy: RackNerd VPS
When your homelab outgrows your Raspberry Pi, a small VPS is the natural next step. I've had good experiences with RackNerd's KVM VPS plans—reliable uptime, generous resource allocations, and they support Docker without fussing about container usage. Their hybrid dedicated servers are also excellent if you need more raw power for intensive workloads. Spin up a VPS, run your Compose stacks with Watchtower on top, and focus on your applications rather than server maintenance.
Next Steps
You now have the fundamentals to build robust multi-container applications. From here, explore:
- Adding a reverse proxy like Caddy or Traefik in front of your services for SSL and routing
- Setting up Portainer for a web UI to manage containers visually
- Implementing backup strategies for your volumes with automated snapshots
- Using docker-compose override files to customize environments per machine
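On that last point: Compose automatically merges a file named docker-compose.override.yml on top of docker-compose.yml when you run docker-compose up. A minimal sketch for a dev machine (the port and variable values are just examples):

```yaml
# docker-compose.override.yml
services:
  api:
    environment:
      NODE_ENV: development
    ports:
      - "3001:3000"  # use a different host port on this machine
```

Production hosts simply don't get this file, so they run the base configuration unchanged.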
Docker Compose transforms chaos into clarity. Once you're comfortable with the basics, you'll find yourself reaching for it for every new project—it's that good.