Docker Networking Essentials: Connecting Containers Securely
When I first started running multiple containers in my homelab, I made a rookie mistake: I exposed everything to the host network and relied on random port mappings. Within weeks, my Nextcloud instance was fighting with my Jellyfin server over ports, and I had no idea how my containers were actually talking to each other. Docker networking seemed like magic until I realized it's actually one of the most powerful—and underutilized—features of containerization.
In this guide, I'm going to show you how to build a secure, organized networking setup for your Docker containers. You'll learn to create custom networks, implement service discovery, isolate services, and communicate securely between containers—all without exposing unnecessary ports to the internet.
Why Default Docker Networking Isn't Enough
Out of the box, Docker attaches containers to a default bridge network where they can reach each other by IP address, but not by name. That means your Nextcloud container can't simply connect to `database:5432`; you have to figure out the container's IP or hardcode it in environment variables. It's messy, and it scales poorly.
Worse, the default bridge network doesn't support automatic service discovery. If you're running a homelab with ten services, that's ten manual configurations. Custom user-defined networks fix this entirely.
I also learned the hard way that port exposure is too easy with the default setup. A single misplaced port mapping, and your internal database is accessible from the internet. Custom networks give you granular control over what's exposed and what stays internal.
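You can see the difference for yourself. Here's a quick sketch (the image and names are just placeholders for illustration):

```shell
# On the default bridge, container names don't resolve:
docker run -d --name web1 nginx:alpine
docker run --rm alpine ping -c 1 web1
# fails with "bad address 'web1'"

# On a user-defined bridge, the same lookup just works:
docker network create demo-net
docker network connect demo-net web1
docker run --rm --network demo-net alpine ping -c 1 web1
```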
Understanding Docker's Three Network Types
Docker ships with three main network drivers, and picking the right one matters:
Bridge: The default. Containers get their own virtual network interface and can communicate via IP or hostname (if they're on a user-defined bridge network). Perfect for most homelabs.
Host: Containers share the host's network stack directly. No isolation. I avoid this except for specific tools like Pi-hole where I need direct access to port 53 across the entire system.
Overlay: For Docker Swarm clusters. You probably don't need this for a homelab unless you're running multiple servers.
I use custom bridge networks for everything. They're secure, they support DNS resolution, and they're simple to reason about.
Creating Your First Custom Network
Let's start with the simplest working example. I'll create a network that holds a Nextcloud instance and a PostgreSQL database:
```shell
docker network create nextcloud-net

docker run -d \
  --name nextcloud-db \
  --network nextcloud-net \
  -e POSTGRES_PASSWORD=secure_password_here \
  postgres:15-alpine

docker run -d \
  --name nextcloud \
  --network nextcloud-net \
  -p 8080:80 \
  -e NEXTCLOUD_ADMIN_USER=admin \
  -e NEXTCLOUD_ADMIN_PASSWORD=admin_password \
  nextcloud:latest
```
That's it. Now your Nextcloud container can reach the database using the hostname nextcloud-db. No IP addresses, no guessing: Docker's embedded DNS resolves the name automatically.
Notice I only exposed port 8080 on the Nextcloud container. The database port 5432 stays internal to the network—it's not accessible from the host or the internet. That's security through default isolation.
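If you want to verify that isolation yourself, two commands make it visible (these assume the container names from the example above):

```shell
# Published ports: nextcloud shows 8080->80, nextcloud-db shows nothing
docker port nextcloud
docker port nextcloud-db

# Which containers sit on the network, and their internal IPs
docker network inspect nextcloud-net
```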
Docker Compose: The Practical Approach
For anything beyond a single container or two, I always reach for Docker Compose. It handles networking automatically and makes configurations reproducible. Here's a real example from my homelab:
```yaml
version: '3.8'

services:
  redis:
    image: redis:7-alpine
    container_name: homelab-redis
    networks:
      - internal
    restart: unless-stopped
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes

  immich-server:
    image: ghcr.io/immich-app/immich-server:latest
    container_name: immich-server
    networks:
      - internal
    environment:
      REDIS_HOSTNAME: redis
      DB_HOSTNAME: immich-db
      DB_USERNAME: immich
      DB_PASSWORD: ${DB_PASSWORD}
      DB_NAME: immich
    depends_on:
      - redis
      - immich-db
    ports:
      - "2283:3001"
    restart: unless-stopped
    volumes:
      - /media/immich:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro

  immich-db:
    image: postgres:15-alpine
    container_name: immich-db
    networks:
      - internal
    environment:
      POSTGRES_USER: immich
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: immich
    restart: unless-stopped
    volumes:
      - immich_db:/var/lib/postgresql/data

  immich-microservices:
    image: ghcr.io/immich-app/immich-server:latest
    container_name: immich-microservices
    command: start.sh microservices
    networks:
      - internal
    environment:
      REDIS_HOSTNAME: redis
      DB_HOSTNAME: immich-db
      DB_USERNAME: immich
      DB_PASSWORD: ${DB_PASSWORD}
      DB_NAME: immich
    depends_on:
      - redis
      - immich-db
    restart: unless-stopped
    volumes:
      - /media/immich:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro

networks:
  internal:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16

volumes:
  redis_data:
  immich_db:
```
Here's what makes this secure and organized:
Single internal network: All services live on the internal network. They can talk to each other by service name (redis, immich-db, and so on), and nothing is exposed except the web port 2283.
Environment variables: Sensitive data like the database password comes from a .env file, not hardcoded. I never commit passwords to version control.
Explicit dependencies: The depends_on directive ensures the database starts before the application tries to connect.
Fixed subnet: I define a custom subnet (172.20.0.0/16) for predictability. Some networking tools need to know IP ranges in advance.
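For completeness, the ${DB_PASSWORD} placeholder above is filled in from a .env file that sits next to docker-compose.yml. A minimal sketch (the value is obviously made up):

```
# .env -- keep this file out of version control (add it to .gitignore)
DB_PASSWORD=change_me_to_something_long_and_random
```

You can confirm the substitution worked by running docker compose config, which prints the fully rendered configuration.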
Service Discovery and DNS
One of Docker's greatest features is its embedded DNS server. Every container on a custom network gets a resolver at 127.0.0.11:53, managed by the Docker daemon.
When your Immich server tries to connect to immich-db, it's actually issuing a DNS query. Docker's embedded DNS server answers it with the current IP of the container named immich-db. If that container restarts and gets a new IP, the name still resolves correctly. Your application doesn't care.
This is why I never hardcode container IPs. Always use service names.
If you need to debug DNS resolution, you can run:
```shell
docker exec immich-server nslookup immich-db
```
If the name resolves, you'll see the internal IP. If it doesn't, the container isn't on the same network.
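If the lookup fails, you usually don't need to recreate anything; you can attach the running container to the right network on the spot. Note that Compose prefixes network names with the project name, so the real name is something like myproject_internal (check with docker network ls):

```shell
# Find the actual network name, attach the container, and re-test
docker network ls
docker network connect myproject_internal immich-server
docker exec immich-server nslookup immich-db
```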
Multi-Network Isolation
For bigger homelabs, I create separate networks for different service groups. For example:
- media-net: Jellyfin, Immich, Radarr, Sonarr
- auth-net: Authentik, Vaultwarden, Keycloak
- storage-net: Nextcloud, its database, Redis
- monitoring-net: Prometheus, Grafana, node-exporter
A single reverse proxy container sits on all networks, allowing external traffic to reach the right service while keeping the networks isolated from each other.
This isn't strictly necessary for security if you're running everything on a single trusted machine, but it becomes critical if you ever want to network multiple homelab servers together or implement zero-trust architecture.
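As a sketch of what that layout looks like in Compose (service names are illustrative, and I've trimmed everything except the networking):

```yaml
services:
  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    networks:          # the proxy joins every network...
      - media-net
      - auth-net
      - storage-net
      - monitoring-net

  jellyfin:
    image: jellyfin/jellyfin:latest
    networks:
      - media-net      # ...but each service joins only its own

networks:
  media-net:
  auth-net:
  storage-net:
  monitoring-net:
```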
Connecting to the Host and External Services
Sometimes you need a container to reach something on the host machine itself. The magic hostname is host.docker.internal on Mac and Windows, but on Linux, it doesn't work by default.
If you're running Docker on Linux and need a container to reach a service on the host, add this to your docker-compose.yml:
```yaml
services:
  my-app:
    image: my-app:latest
    networks:
      - mynet
    extra_hosts:
      - "host:host-gateway"
```
Then your container can reach the host at host:PORT. I use this for accessing local NAS drives or a host-based database that I haven't containerized yet.
Exposing Services Safely
The port mapping syntax 8080:80 means "listen on host port 8080, forward to container port 80." But you can restrict which network interface this binds to:
```shell
docker run -p 127.0.0.1:8080:80 myapp
```
This binds only to localhost, making the service inaccessible from other machines on your LAN. Perfect for temporary access or development.
```shell
docker run -p 0.0.0.0:8080:80 myapp
```
This binds to all interfaces, making it accessible from anywhere that can reach your host. Use this carefully, and always put a reverse proxy or authentication layer in front.
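The same interface binding works in Compose; just include the address in the port string:

```yaml
services:
  my-app:
    image: my-app:latest
    ports:
      - "127.0.0.1:8080:80"   # localhost only
```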
Reverse Proxy Integration
My preferred pattern is a single reverse proxy (usually Caddy or Traefik) that sits on all internal networks and handles SSL/TLS termination and routing. External users hit the reverse proxy, which then forwards to the right internal service.
For example, a Caddy config might look like:
```
immich.homelab.local {
    reverse_proxy immich-server:3001
}

nextcloud.homelab.local {
    reverse_proxy nextcloud:80
}
```
Caddy resolves each upstream name through Docker's embedded DNS, finds the service on its connected network, and routes the traffic. No IP addresses exposed, no port chaos.
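One gotcha: for those hostnames to resolve, the Caddy container itself must be attached to the same network as the services it proxies. Something like this, assuming a network named internal as in the Compose file earlier:

```shell
docker run -d --name caddy \
  --network internal \
  -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" \
  caddy:2-alpine
```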
Practical Networking Checklist
Before deploying any multi-container application, I run through this:
- ✓ Define a custom network for related services (don't use the default bridge)
- ✓ Use service names in connection strings, not IPs
- ✓ Store passwords and secrets in .env files
- ✓ Only expose ports that need external access
- ✓ Bind non-public services to 127.0.0.1
- ✓ Use a reverse proxy for all external-facing traffic
- ✓ Test connectivity with `docker exec` and `nslookup`
Why This Matters for Your Homelab
Running services on a proper Docker network setup scales from a simple two-container setup to a dozen services without adding complexity. Your configurations stay readable, your services stay secure by default, and troubleshooting becomes straightforward.
If you ever want to migrate from a homelab setup to a small VPS (around $40/year with providers like RackNerd), this networking knowledge transfers directly. Container networking is the same whether you're running on your spare laptop or a cloud server.
Master Docker networking now, and you'll never have to relearn it when your homelab grows.