Docker Compose Networking: Isolating Services with Custom Bridge Networks
One of the most under-used features in Docker Compose is also one of the most important for security: custom bridge networks. Most people throw every service into the default network Docker Compose creates automatically, then wonder why a compromised container can reach everything else on the stack. I've been burned by this before — a misconfigured Redis instance that was reachable from a web-facing container it had absolutely no business talking to. Custom networks fix exactly that problem.
In this tutorial I'll show you how to define explicit custom bridge networks in your Compose files, how to attach containers to multiple networks when necessary, and how to build a proper multi-tier layout where your database is completely invisible to your reverse proxy. By the end you'll have a pattern you can apply to every stack you run.
Why the Default Network Isn't Good Enough
When you run docker compose up without specifying any networks, Compose creates a single bridge network named after your project directory (e.g., myapp_default). Every service in that file joins it. That means your frontend, your backend API, your database, and your cache are all on the same Layer 2 segment, able to reach each other by container name on any port.
This is convenient for development, but in production — including homelabs that are exposed to the internet — it's a real risk. Container escape vulnerabilities happen. Misconfigured services expose unexpected ports. The principle of least privilege applies just as much to container networking as it does to user permissions.
Custom bridge networks let you say: "the reverse proxy can only talk to the app, the app can talk to both the cache and the database, but the reverse proxy cannot reach the database at all." That's a meaningful security boundary.
Defining Custom Networks in Compose
The syntax is straightforward. You declare networks at the top level of your docker-compose.yml, then assign containers to them under each service's networks: key. Here's a minimal example with a Caddy reverse proxy, a Node.js app, and a PostgreSQL database:
```yaml
# docker-compose.yml
services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - frontend

  app:
    image: node:22-alpine
    restart: unless-stopped
    working_dir: /app
    volumes:
      - ./app:/app
    command: node server.js
    environment:
      - DATABASE_URL=postgres://appuser:changeme@db:5432/appdb
    networks:
      - frontend
      - backend
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=appuser
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_DB=appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true

volumes:
  caddy_data:
  caddy_config:
  pgdata:
```
Notice a few things here. The db service only exists on the backend network. Caddy only exists on the frontend network. The app service spans both, because it needs to serve traffic through Caddy and also query the database. If someone were to compromise Caddy, they'd be stuck on the frontend network with no route to PostgreSQL whatsoever.
internal: true on a network tells Docker not to attach a default gateway to it. Containers on that network cannot initiate outbound connections to the internet, which is exactly what you want for a database-tier network. The app container can still reach the database because they share the backend network, but the database itself can't phone home.

A Real-World Stack: Immich with Isolated Networks
Let me show a more realistic example using Immich — a self-hosted photo platform that consists of multiple cooperating services. The default Immich Compose file puts everything on one network. Here's how I restructure it with proper isolation:
```yaml
# docker-compose.yml (Immich with network isolation)
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    restart: unless-stopped
    volumes:
      - /mnt/photos:/usr/src/app/upload
    environment:
      - DB_HOSTNAME=immich-postgres
      - DB_USERNAME=immich
      - DB_PASSWORD=changeme
      - DB_DATABASE_NAME=immich
      - REDIS_HOSTNAME=immich-redis
    depends_on:
      - immich-postgres
      - immich-redis
    networks:
      - proxy
      - immich-internal

  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    restart: unless-stopped
    volumes:
      - model_cache:/cache
    networks:
      - immich-internal

  immich-redis:
    image: redis:7-alpine
    restart: unless-stopped
    networks:
      - immich-internal

  immich-postgres:
    image: tensorchord/pgvecto-rs:pg16-v0.2.0
    restart: unless-stopped
    environment:
      - POSTGRES_USER=immich
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_DB=immich
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - immich-internal

  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
    networks:
      - proxy

networks:
  proxy:
    driver: bridge
  immich-internal:
    driver: bridge
    internal: true

volumes:
  pgdata:
  model_cache:
  caddy_data:
```
With this layout, Caddy can reach immich-server via the proxy network using the hostname immich-server. But Caddy has zero awareness of Redis, PostgreSQL, or the machine learning container — they live entirely inside immich-internal. One wrinkle: the machine learning service downloads its models on first startup, so it needs outbound access at least once (or a pre-populated model_cache volume) before it can work entirely within the internal network.
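Because the machine learning service pulls its models down at startup, a fully internal network needs a plan for that first run. One option (network and container names here follow the stack above; your actual container name may differ — check docker compose ps) is to temporarily attach the container to an outbound-capable network:

```shell
# Temporarily give the ML container outbound access for its first run...
docker network connect immich_proxy immich-immich-machine-learning-1

# ...then detach it once the models are cached in the model_cache volume.
docker network disconnect immich_proxy immich-immich-machine-learning-1
```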
A warning about this: if a network is marked internal: true but a service on it needs to download data at runtime (not just at build time), it will silently fail. Check your service logs if a container seems to hang on startup — lack of outbound access is a common culprit. Move that specific service to a non-internal network, or pre-fetch everything it needs.

Inspecting Your Networks
Once your stack is running, you can verify the isolation is working as expected. These commands are the ones I run every time I set up a new stack:
```shell
# List all Docker networks on the host
docker network ls

# Inspect a specific network to see which containers are attached
docker network inspect immich_immich-internal

# Confirm a container cannot reach a service on a different network.
# This should fail (name resolution error or timeout) if isolation is working:
docker compose exec caddy ping -c 3 immich-postgres

# Check which networks a specific container belongs to
docker inspect "$(docker compose ps -q immich-server)" \
  --format '{{json .NetworkSettings.Networks}}' | python3 -m json.tool

# Verify internal networks have no gateway
docker network inspect immich_immich-internal \
  --format '{{range .IPAM.Config}}Gateway: {{.Gateway}}{{end}}'
```
That last command is particularly useful. On an internal: true network, the gateway field will be empty — confirming no outbound route exists.
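Beyond ping, I also like to prove the no-outbound property directly. From a service that sits only on the internal network (immich-redis here; BusyBox wget ships with the Alpine-based image), any request to the outside should fail:

```shell
# Expect a non-zero exit: the internal network has no default gateway,
# so there is no route to the internet at all.
docker compose exec immich-redis wget -qO- --timeout=5 http://example.com \
  || echo "no outbound route (isolation confirmed)"
```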
Naming Conventions and Project Prefixes
Docker Compose automatically prefixes network names with your project name, which defaults to the directory name. So if your Compose file is in /home/user/immich/ and you define a network called immich-internal, Docker will actually create a network called immich_immich-internal. This matters when you're referencing networks from external tools or connecting containers across multiple Compose stacks.
You can set an explicit project name to keep this predictable. Either use the --project-name flag, or add it to a .env file in the same directory:
```shell
# .env
COMPOSE_PROJECT_NAME=immich
```
If you want a Compose stack to attach a container to a network managed by a different Compose stack — for example, connecting a monitoring stack to the same proxy network as your apps — use the external: true option:
```yaml
networks:
  proxy:
    external: true
    name: immich_proxy
```
This tells Compose not to create the network itself, but to attach to the already-existing one. I use this pattern to connect Uptime Kuma's container to the same internal networks as everything it monitors, without giving it internet access or exposing it through the main proxy unnecessarily.
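On the owning stack's side, you can make that name predictable by pinning it with the name key instead of relying on the project prefix — a fragment matching the immich_proxy example above:

```yaml
networks:
  proxy:
    name: immich_proxy   # fixed name; Compose skips the <project>_ prefix
    driver: bridge
```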
Deploying Isolated Stacks on a VPS
If you're running this kind of setup on a VPS rather than local hardware, proper network isolation becomes even more important. I run several of my own stacks on DigitalOcean Droplets — they give you a clean Ubuntu environment, predictable pricing, and the ability to snapshot the whole server before making risky changes.
On a VPS, combining Docker's custom bridge networks with UFW rules at the host level gives you defense in depth. Docker manages internal communication, UFW controls what reaches the host at all. The two layers complement each other well.
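One caveat about that pairing: Docker inserts its published-port rules into iptables ahead of UFW's chains, so a ufw deny alone won't block a port you've published. The simple mitigation is to bind anything that shouldn't be public to loopback — a hypothetical admin panel, for instance:

```yaml
services:
  admin-ui:                       # hypothetical service, for illustration
    image: example/admin:latest   # placeholder image
    ports:
      - "127.0.0.1:8080:8080"     # host-only; reach it via an SSH tunnel
```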
Wrapping Up
Custom bridge networks in Docker Compose cost you almost nothing to implement — a few extra lines in your Compose file — and they deliver a genuine security improvement over the default single-network setup. The pattern I reach for on every stack is: one proxy network that the reverse proxy and any internet-facing services share, and one internal network (with internal: true) for databases, caches, and background workers. Services that need to span tiers, like your main application container, join both.
From here, a natural next step is combining this network isolation with Authelia or Authentik for authentication on the proxy layer — so even if someone reaches a service on the proxy network, they still have to authenticate. Check out our tutorial on implementing zero-trust security with Authelia for exactly that setup.