Docker Networking for Homelabs: Internal Communication and External Access


When I first started containerizing my homelab, I treated each Docker container like an island—figuring out how to talk to them was painful, chaotic, and involved a lot of port mapping headaches. After running Nextcloud, Jellyfin, Gitea, and a bunch of monitoring stacks, I learned that understanding Docker networking fundamentally changed how I deploy services. Today, I'm sharing what actually works.

Docker networking doesn't have to be complicated. Whether you're running services on a single machine or bridging containers across a VPS (I recommend RackNerd's VPS starting around $40/year if you want public-facing containers), the principles remain the same. Let me walk you through the architecture that's kept my homelab stable and accessible.

Why Default Bridge Networking Breaks Everything

By default, Docker attaches every container you run to a single built-in bridge network called bridge. This sounds simple, but it has a serious limitation: containers on the default bridge cannot resolve each other by hostname. They can only communicate by IP address, and those IPs can change every time a container restarts. This is why so many people resort to the deprecated --link flag or run everything with --network host.

I learned this the hard way when my Nginx reverse proxy couldn't reach my application container after a restart. The container's IP had changed, and the address I'd hardcoded in the upstream config no longer pointed anywhere. The solution? Custom user-defined bridge networks.

Custom networks enable automatic service discovery via DNS. When you create a bridge network and attach multiple containers to it, Docker's embedded DNS server automatically resolves container names to their IP addresses. This is the foundation of any reliable homelab setup.
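You can see the difference with a quick experiment. This is a sketch assuming Docker is installed; the network and container names are arbitrary:

```shell
# Create a user-defined bridge network
docker network create demo-net

# Start a throwaway web server attached to it
docker run -d --name web --network demo-net nginx:alpine

# A second container on the same network can resolve "web" by name
docker run --rm --network demo-net alpine wget -qO- http://web >/dev/null \
  && echo "resolved web by name"

# The same lookup from the default bridge network would fail

# Clean up
docker rm -f web && docker network rm demo-net
```

If you rerun the wget probe with --network bridge instead, the name lookup fails, which is exactly the default-bridge limitation described above.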

Creating and Using Custom Bridge Networks

Here's how I structure networking in my homelab. I typically create separate networks for different service tiers: one for the reverse proxy and publicly-facing apps, another for databases and internal services, and sometimes a third for IoT and monitoring.

# Create a custom bridge network for your reverse proxy and frontend services
docker network create frontend

# Create a network for backend services that shouldn't be exposed
docker network create backend

# Create a network for databases and sensitive services
docker network create database

# List all networks
docker network ls

# Inspect a network to see connected containers
docker network inspect frontend

Now when I deploy a Caddy reverse proxy and Nextcloud, I connect them both to the frontend network. The Caddy container can reach Nextcloud simply by using the hostname nextcloud in its configuration—no IP addresses, no manual DNS.

# Run Caddy on the frontend network
docker run -d \
  --name caddy \
  --network frontend \
  -p 80:80 -p 443:443 \
  -v /home/user/Caddyfile:/etc/caddy/Caddyfile \
  -v caddy_data:/data \
  -v caddy_config:/config \
  caddy:latest

# Run Nextcloud on the same frontend network
docker run -d \
  --name nextcloud \
  --network frontend \
  -e MYSQL_HOST=db.database \
  -v nextcloud_data:/var/www/html \
  nextcloud:latest
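One caveat with the run command above: docker run attaches a container to only one network, so Nextcloud can't reach a database sitting on the separate database network until it's attached there too. docker network connect adds a running container to an additional network. The names here are illustrative and assume a MariaDB container called db already running on the database network:

```shell
# Attach the running Nextcloud container to the database network as well
docker network connect database nextcloud

# Verify which containers now sit on the database network
docker network inspect database \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```

After this, the database hostname resolves from inside the Nextcloud container, and the container keeps its frontend attachment for reverse proxy traffic.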

In the Caddyfile, I reference Nextcloud by its container name:

files.example.com {
    reverse_proxy nextcloud:80
}

This works because both containers are on the same custom network. Docker's DNS resolves nextcloud to the container's current IP automatically.

Tip: Use custom bridge networks for every multi-container setup. It enables service discovery, isolation, and makes your docker-compose files much cleaner. Never rely on container linking (the --link flag) in production—it's deprecated and inflexible.

Docker Compose and Network Isolation

In Docker Compose, networks are even simpler. When you define a compose file, all services are automatically placed on a default network named <project>_default, where the project name defaults to the name of your project directory. No extra configuration needed.
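For example, bringing up a trivial stack shows the auto-created network. The directory and service names here are arbitrary:

```shell
# In a project directory called "myapp", a minimal compose file is enough
mkdir -p myapp && cd myapp
cat > compose.yaml <<'EOF'
services:
  app:
    image: alpine
    command: sleep infinity
EOF

docker compose up -d

# Compose created a bridge network named "myapp_default"
docker network ls --filter name=myapp

# Tear down the stack and remove its network
docker compose down
```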

Here's a real compose stack I run—a Caddy reverse proxy with Gitea backend and PostgreSQL database:

version: '3.8'

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    networks:
      - frontend
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    restart: unless-stopped

  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    networks:
      - frontend
      - backend
    environment:
      - DB_TYPE=postgres
      - DB_HOST=postgres
      - DB_PORT=5432
      - DB_USER=gitea
      - DB_PASSWD=secure_password_here
      - DB_NAME=gitea
    volumes:
      - gitea_data:/data
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    container_name: postgres
    networks:
      - backend
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=secure_password_here
      - POSTGRES_DB=gitea
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  caddy_data:
  caddy_config:
  gitea_data:
  postgres_data:

Notice how Gitea is on both frontend and backend networks? This is intentional. Gitea needs to accept reverse proxy traffic from Caddy (frontend) while connecting to PostgreSQL on the backend network. The PostgreSQL container only sits on backend—it's never exposed to the outside, and the reverse proxy can't reach it directly. This is network isolation done right.

When you run docker-compose up -d, Gitea can reach PostgreSQL using just postgres:5432 because they're on the same network. Caddy reaches Gitea using gitea:3000. No hardcoding IPs, no restart headaches.
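A quick way to confirm both discovery paths is to probe them from inside the running stack. Busybox's nc and wget ship in these Alpine-based images, though that's worth verifying for your particular tags:

```shell
# From Gitea, check that the postgres hostname resolves and the port answers
docker compose exec gitea nc -zv postgres 5432

# From Caddy, fetch Gitea's web UI by container name
docker compose exec caddy wget -qO- http://gitea:3000 >/dev/null \
  && echo "gitea reachable from caddy"

# This one should fail: Caddy is not on the backend network,
# so the isolation is working as intended
docker compose exec caddy nc -zv -w 2 postgres 5432
```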

Port Mapping: Exposing Services Safely

Port mapping is where internal Docker networking meets the outside world. When I deploy a service that needs external access, I map a host port to a container port using the -p flag (or ports: in compose).

The syntax is -p HOST_PORT:CONTAINER_PORT. If I run a Jellyfin media server, I'd do:

docker run -d \
  --name jellyfin \
  --network frontend \
  -p 8096:8096 \
  -v jellyfin_config:/config \
  -v jellyfin_cache:/cache \
  -v /mnt/media:/media \
  jellyfin/jellyfin:latest

This maps port 8096 on the host to port 8096 in the container. Anyone accessing your-ip:8096 reaches Jellyfin. But here's the key: Jellyfin doesn't need to know about this port mapping. Inside the container, it listens on port 8096. The mapping is purely a host-level translation.

If your VPS already has something running on port 8096, you'd change it to:

docker run -d \
  --name jellyfin \
  --network frontend \
  -p 9000:8096 \
  # ... rest of the config

Now the host listens on 9000, but Jellyfin still listens on 8096 internally. The container neither knows nor cares about the host port.
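A related trick: binding the mapping to 127.0.0.1 keeps the service reachable from the host itself (handy when a reverse proxy on the host fronts it) while hiding it from every other machine on the network. A sketch:

```shell
# Only the host itself can reach this mapping; other machines cannot
docker run -d \
  --name jellyfin \
  --network frontend \
  -p 127.0.0.1:8096:8096 \
  -v jellyfin_config:/config \
  jellyfin/jellyfin:latest

# Show the active port mappings for the container
docker port jellyfin
```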

Watch out: Never expose everything. If a service only needs internal communication (like a database), don't map its port. Map only what needs to be reachable from outside the Docker network. This keeps your attack surface small and your homelab secure.
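To illustrate the tip (names are illustrative): a database started without any -p flag is reachable from containers on its Docker network but never from the host's external interfaces.

```shell
# No -p flag: nothing is published on the host
docker run -d --name internal-db --network backend \
  -e POSTGRES_PASSWORD=change_me postgres:15-alpine

# Reachable from a container on the same network...
docker run --rm --network backend postgres:15-alpine \
  pg_isready -h internal-db

# ...but "docker port" shows no host mappings at all
docker port internal-db
```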

Service Discovery and DNS Resolution

One of the biggest wins from using custom networks is automatic service discovery. When you're inside a container on a custom network, you can reach other containers by name.

For example, in my Caddy config, I reference services by hostname:

files.example.com {
    reverse_proxy nextcloud:80
}

git.example.com {
    reverse_proxy gitea:3000
}

media.example.com {
    reverse_proxy jellyfin:8096
}

This works because Caddy is on the same network as these services. Docker's internal DNS resolver handles the translation. If a container restarts and gets a new IP, the DNS entry updates automatically. No manual intervention, no downtime from IP changes.

This is fundamentally different from host-mode networking, where containers see the host's network directly. With host mode, you lose isolation and gain port conflicts. I only use host mode when I have a very specific reason (like running Pi-hole on a dedicated box), and I've rarely encountered such a case in my homelab.

Connecting External Hosts and Exposing to the Internet

What if you want to reach Docker containers from another machine on your network? Or from the internet?

For local network access, you map ports to the host and connect to the host's IP. For internet access, you either:

1. Port forward through your router (traditional but requires managing ports, DynDNS, etc.)

2. Use a reverse proxy with a public domain (my preference). I run a VPS with Caddy, and it proxies traffic to my homelab via a Tailscale VPN or Cloudflare Tunnel. This way, my home IP never touches the public internet.

With Cloudflare Tunnel, I install cloudflared on my homelab machine, and it opens an outbound encrypted tunnel to Cloudflare's edge. Cloudflare then routes requests for the tunneled hostname straight through that tunnel, with no inbound ports open on my network and no VPS needed on this path:

# On your homelab machine, install cloudflared and create a tunnel
cloudflared tunnel login
cloudflared tunnel create my-homelab
cloudflared tunnel route dns my-homelab example.com

# Start the connector (or install it as a service)
cloudflared tunnel run my-homelab

On the Tailscale route, the VPS does the fronting instead: Caddy on the VPS proxies to the homelab over the tailnet. Skipping certificate verification is tolerable here only because this hop already rides inside Tailscale's encrypted connection:

# On your VPS, configure Caddy to proxy to the homelab over Tailscale
example.com {
    reverse_proxy https://homelab.example.com {
        transport http {
            tls_insecure_skip_verify
        }
    }
}

Your containers don't know any of this is happening. They sit on your homelab network, the local reverse proxy reaches them over Docker networking, and the tunnel or tailnet encrypts the traffic in transit. Either way, it's dramatically more secure than opening ports on your router.
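On the Cloudflare Tunnel side, cloudflared reads a config file that maps public hostnames to local services. A minimal sketch, where the credentials path and tunnel UUID are placeholders produced by the create step:

```yaml
# ~/.cloudflared/config.yml
tunnel: my-homelab
credentials-file: /home/user/.cloudflared/<TUNNEL-UUID>.json

ingress:
  # Send the public hostname to the local reverse proxy
  - hostname: example.com
    service: http://localhost:80
  # Required catch-all rule for anything that doesn't match
  - service: http_status:404
```

The final catch-all rule is mandatory; cloudflared refuses to start without one.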

Common Networking Gotchas and How to Fix Them

Container can't reach another container by name: Check that both containers are on the same custom network. The default bridge network doesn't support DNS resolution by container name. Use docker network inspect <network-name> to verify.

Port already in use: If you get "address already in use", check what's listening: sudo ss -tlnp or sudo netstat -tlnp (Linux), or sudo lsof -iTCP -sTCP:LISTEN -n -P (macOS). Change your mapped port or stop the conflicting service.

Container can't reach the internet: Docker NATs outbound traffic from bridge networks through the host's interface using iptables masquerade rules. If this fails, check that the Docker daemon is running, that IP forwarding is enabled on the host (sysctl net.ipv4.ip_forward), and that your host firewall isn't dropping forwarded traffic. On my VPS, I make sure UFW permits outbound traffic: sudo ufw default allow outgoing.

Reverse proxy can't reach backend services: Ensure the reverse proxy container is on the same network as your backend. If you use docker-compose, services are automatically on the project network. If you're using manual docker run commands, explicitly add --network my-network to all containers.

Real-World Homelab Setup

Here's how I actually structure a complete homelab stack with multiple tiers:

External VPS: Runs Caddy listening on port 80/443, proxies to homelab over Cloudflare Tunnel or Tailscale

Homelab: Runs internal Caddy or reverse proxy, all user-facing services, and backend databases on separate networks

Networking: Frontend services (Nextcloud, Jellyfin, Gitea) on one network, backend services (databases, caches) on another, monitoring (Prometheus, Grafana) on a third

This separation means a compromised frontend service can't directly access your database. Traffic flows through defined channels. It's not bulletproof, but it's miles better than everything on the default network.

If you're running this on a VPS, I'd recommend RackNerd's offerings—their New Year deals include reliable Linux VPS from about $40/year with plenty of bandwidth for a homelab. The cost is low enough that you can afford redundancy without guilt.

Next Steps

Start by creating custom bridge networks for your existing compose stacks. Change --network bridge to --network my-network, deploy, and watch service discovery work. Once you're comfortable with this, layer in a reverse proxy (Caddy, Traefik, or Nginx) to unify access to all your services.
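Migrating a running container is non-destructive: you can detach it from the default bridge and attach it to a custom network without recreating it. The names here are illustrative:

```shell
# Create the target network if it doesn't exist yet
docker network create my-network

# Move an existing container off the default bridge
docker network disconnect bridge my-container
docker network connect my-network my-container

# Confirm the container's new network membership
docker inspect my-container \
  --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}'
```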

Then, when you're ready, expose that reverse proxy securely to the internet using