Zero-Trust Networking in Your Homelab: Microsegmentation Strategies

I've been running a homelab for five years, and I made every mistake in the book. One service got compromised? Suddenly every other container on my network was at risk. One compromised API key meant an attacker could pivot freely across my entire infrastructure. That changed the moment I stopped assuming everything inside my network was trusted.

Zero-trust networking isn't just for enterprise security teams anymore. It's becoming essential for anyone running self-hosted services, especially as our homelabs grow more complex. Today I'm walking you through practical microsegmentation strategies I've actually implemented—not theoretical security theater, but real tactics that work on modest hardware.

The core principle is simple: never trust, always verify. Every service, every connection, every request is treated as potentially hostile until proven otherwise. Let me show you how to build this in your homelab.

Understanding Zero-Trust and Microsegmentation

Zero-trust is fundamentally different from traditional "castle and moat" security. In a castle-and-moat model, you assume everything inside your network is safe. Once someone crosses the perimeter, they have broad access. I fell into this trap hard—my entire homelab lived in one massive Docker network where any container could talk to any other container.

Microsegmentation breaks your network into tiny zones. Each zone has its own access policies. A compromised Jellyfin instance can't automatically talk to your Nextcloud database. Your IoT devices live in a completely separate subnet from your VPS backup scripts. Even your Vaultwarden password manager has its own ingress rules.

The three pillars are:

- Verify explicitly: authenticate and authorize every request, regardless of where it originates.
- Least privilege: give each service only the access it needs to function, and nothing more.
- Assume breach: design as though an attacker is already inside, and limit the blast radius when they are.

In a homelab, you're implementing this through Docker networks, firewall rules, and careful service architecture. You're not replacing your firewall—you're layering controls on top of it.

Building Isolated Docker Networks

My first real step toward zero-trust was stopping the practice of throwing everything on the default bridge network. Docker networks are your microsegmentation weapon at the container level.

Here's a docker-compose.yml structure I use. I separate my stack into three networks: frontend (reverse proxy), backend (databases and sensitive services), and iot (untrusted IoT devices).

version: '3.8'

networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.21.0.0/16
  iot:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/16

services:
  caddy:
    image: caddy:latest
    container_name: caddy_reverse_proxy
    networks:
      - frontend
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    restart: unless-stopped

  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    networks:
      - frontend
      - backend
    depends_on:
      - nextcloud_db
    environment:
      - POSTGRES_HOST=nextcloud_db
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    restart: unless-stopped

  nextcloud_db:
    image: postgres:15-alpine
    container_name: nextcloud_db
    networks:
      - backend
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - nextcloud_db_data:/var/lib/postgresql/data
    restart: unless-stopped

  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    networks:
      - frontend
      - backend
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@vaultwarden_db/vaultwarden
      - DOMAIN=https://vault.example.com
    restart: unless-stopped

  vaultwarden_db:
    image: postgres:15-alpine
    container_name: vaultwarden_db
    networks:
      - backend
    environment:
      - POSTGRES_DB=vaultwarden
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - vaultwarden_db_data:/var/lib/postgresql/data
    restart: unless-stopped

  tasmota_device_bridge:
    image: eclipse-mosquitto:latest
    container_name: mqtt_broker
    networks:
      - iot
    ports:
      - "1883:1883"
    volumes:
      - ./mosquitto.conf:/mosquitto/config/mosquitto.conf
    restart: unless-stopped

volumes:
  caddy_data:
  nextcloud_db_data:
  vaultwarden_db_data:

Tip: Each service only connects to the networks it absolutely needs. Nextcloud and Vaultwarden both hit the frontend network (for the reverse proxy), but they don't need to talk to each other. Databases live only on the backend network. IoT devices are completely isolated. If a device on the IoT network is compromised, it cannot reach your password manager or file server.

The key point: services can only communicate with containers on networks they're explicitly joined to. Your Tasmota MQTT broker on the IoT network literally cannot reach your Vaultwarden database on the backend network—there's no path. Docker won't route packets between networks without explicit configuration.

I used to set environment variables like `POSTGRES_HOST=nextcloud_db` without understanding the service discovery behind them. Docker's embedded DNS lets containers resolve each other by container name, but only on networks they share. If a container isn't on the same network, the lookup fails outright. That failure is intentional isolation, and it's powerful.

Host-Level Firewalling with UFW and Port Exposure

Docker networks are great, but they're not enough. A malicious container with enough privileges can still see traffic on the host. I also run host-level firewall rules to deny everything by default, then explicitly allow only what I need.

On my homelab VPS (I host a few services on a DigitalOcean Droplet for redundancy), I use UFW to enforce zero-trust at the system level:

# Default policies: deny everything
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH from specific IPs only (adjust to your home IP)
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp

# Allow HTTP/HTTPS globally (reverse proxy handles auth)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Allow Wireguard VPN from anywhere
sudo ufw allow 51820/udp

# Explicitly deny database ports from external access
sudo ufw deny 5432/tcp
sudo ufw deny 3306/tcp
sudo ufw deny 27017/tcp

# Allow forwarded traffic between Docker bridges (be careful here;
# plain "ufw allow" only covers traffic addressed to the host itself,
# so forwarded traffic between subnets needs a route rule)
sudo ufw route allow from 172.20.0.0/16 to 172.21.0.0/16

# Enable the firewall
sudo ufw enable

# Verify rules
sudo ufw status verbose

The critical part here is the default-deny stance. Every service starts with no external access, and I only open ports that absolutely need to be reachable from the internet (HTTP, HTTPS, WireGuard). One important caveat: Docker manipulates iptables directly, so any port you publish in docker-compose (like 80, 443, and 1883 above) bypasses UFW entirely. UFW protects ports bound by host processes; for containers, the real control is simply not publishing a port unless it must be reachable.
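To keep yourself honest about that default-deny stance, it helps to periodically audit what is actually listening. Here's a small sketch; the allowlist of ports is an assumption you should adjust to your own services:

```shell
# audit_ports: print any listening TCP port that isn't in the allowlist.
# Reads one port number per line on stdin; allowed ports are the arguments.
audit_ports() {
  allowed=" $* "
  while read -r port; do
    case "$allowed" in
      *" $port "*) ;;                              # expected, say nothing
      *) echo "UNEXPECTED listener on port $port" ;;
    esac
  done
}

# Usage: extract the port column from ss and check it against the allowlist
ss -tln | awk 'NR>1 {n=split($4,a,":"); print a[n]}' | sort -un \
  | audit_ports 22 80 443
```

Anything this prints is either a service you forgot about or something that shouldn't be there, and both are worth investigating.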

I also use UFW's logging to catch connection attempts:

# Enable logging at the "low" level (higher levels get verbose fast)
sudo ufw logging low

# Watch rejected connections in real-time
sudo tail -f /var/log/ufw.log | grep BLOCK

This has caught several interesting things—port scanners, accidental connections from my own scripts, even a brief stint where a misconfigured service tried to expose itself externally. The logs are your early warning system.

Network Policies at the Container Level

Docker networks are good, but I wanted finer control. Enter network policies. If you're running multiple Docker hosts or thinking about Kubernetes down the line, Calico network policies give you per-service firewall rules.

I don't run Kubernetes in my homelab (the overhead isn't worth it at my scale), so instead I express the same per-service policies as iptables rules keyed to my Docker subnets.

What I actually do is write a simple firewall rules file that maps to my services:

#!/bin/bash
# /opt/homelab/firewall_policies.sh
#
# Two caveats worth knowing:
# - Docker names its bridges br-<network id>, not br-frontend. Pin
#   predictable names with the com.docker.network.bridge.name driver
#   option in your compose file, or look them up with `docker network ls`.
# - Docker owns the FORWARD chain and consults DOCKER-USER first, so
#   custom policies belong there. That chain ends in RETURN, which means
#   appended rules never fire -- insert (-I) in reverse order instead.

# 3. Deny cross-network traffic by default
sudo iptables -I DOCKER-USER -i br-iot -o br-frontend -j DROP
sudo iptables -I DOCKER-USER -i br-iot -o br-backend -j DROP
sudo iptables -I DOCKER-USER -i br-frontend -o br-backend -j DROP

# 2. Nextcloud and Vaultwarden may reach their Postgres databases
sudo iptables -I DOCKER-USER -i br-frontend -o br-backend \
  -p tcp --dport 5432 -j ACCEPT

# 1. Always allow established return traffic
sudo iptables -I DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Persist these rules
sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null

Watch out: iptables rules are powerful but easy to lock yourself out with. I always keep a console open during testing, and I use rules like `sudo iptables -I INPUT 1 -p tcp --dport 22 -j ACCEPT` (insert at position 1) to ensure SSH stays accessible while I'm testing. Test carefully, and consider using `nftables` on newer systems—it's more maintainable.

In practice, for my homelab size, the Docker network isolation plus UFW host firewall covers 95% of what I need. The iptables layer is there for defense-in-depth, but it's rarely the deciding factor.

Service-Level Authentication and Authorization

Network isolation stops an attacker from reaching a service, but what if they somehow do? Service-level authentication is your second line of defense.

I run Authelia as a single-sign-on (SSO) proxy in front of my services. Every request to Nextcloud, Vaultwarden, Jellyfin, or any web service first hits Authelia, which checks credentials.

Here's my Authelia configuration in the docker-compose:

  authelia:
    image: authelia/authelia:latest
    container_name: authelia
    networks:
      - frontend
    environment:
      - TZ=UTC
      - AUTHELIA_SERVER_ADDRESS=tcp://0.0.0.0:9091
    volumes:
      - ./authelia/config.yml:/config/configuration.yml
      - ./authelia/users.yml:/config/users_database.yml
    restart: unless-stopped

  # Caddy config references Authelia as an auth handler.
  # In your Caddyfile (Caddy runs in its own container, so use service
  # names on the shared frontend network, not localhost):
  # *.example.com {
  #   forward_auth authelia:9091 {
  #     uri /api/verify?rd=https://auth.example.com
  #     copy_headers Remote-User Remote-Groups
  #   }
  #   reverse_proxy nextcloud:80
  # }

Authelia sits between the user and every protected service. It handles TOTP (time-based one-time passwords), LDAP integration if you want it, and keeps detailed session logs. Even if an attacker somehow gets inside your network, they hit the SSO wall first.

I've also configured per-service authorization rules. My IoT admin panel, for example, only allows access from my phone on the home WiFi plus my work IP. Jellyfin is open to family but restricted to evening hours. This is business logic sitting on top of network isolation—defense in depth.
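Those per-service rules live in Authelia's access_control section. Here's a sketch of the shape mine take; the domains, network range, and group name are placeholders for your own:

```yaml
access_control:
  default_policy: deny          # zero-trust: anything unmatched is refused
  rules:
    # IoT admin panel: only from the home LAN, and always with 2FA
    - domain: iot.example.com
      policy: two_factor
      networks:
        - 192.168.1.0/24
    # Vaultwarden: 2FA from anywhere
    - domain: vault.example.com
      policy: two_factor
    # Jellyfin: single factor, but only for the family group
    - domain: jellyfin.example.com
      policy: one_factor
      subject:
        - "group:family"
```

The `default_policy: deny` line is the zero-trust posture in miniature: a service you forgot to write a rule for is unreachable, not accidentally public.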

Monitoring and Detection

Isolation and access controls prevent many attacks, but they're not perfect, so I also run monitoring to catch suspicious behavior.

The moment a container tries to reach a port it shouldn't, I see it. The moment a service starts consuming unexpected resources, I see it. This isn't perfect intrusion detection, but it's good enough for a homelab and catches most real problems before they become serious.
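One cheap, concrete check along those lines is watching for drift in what's listening on the host. This sketch (the function name and baseline path are mine) records a baseline of listening TCP ports on first run and flags any change afterward:

```shell
# check_port_drift: compare currently listening TCP ports against a
# saved baseline, and shout if the set has changed.
check_port_drift() {
  baseline="$1"
  current=$(ss -tln | awk 'NR>1 {n=split($4,a,":"); print a[n]}' | sort -un)
  if [ ! -f "$baseline" ]; then
    printf '%s\n' "$current" > "$baseline"   # first run: record the baseline
    echo "baseline written"
    return 0
  fi
  printf '%s\n' "$current" > "${baseline}.now"
  if diff -q "$baseline" "${baseline}.now" > /dev/null; then
    echo "no drift"
  else
    echo "PORT DRIFT DETECTED"
  fi
}

# Example: run from cron every few minutes and alert on drift
check_port_drift /tmp/ports.baseline
```

A new listener usually means a misconfigured container published a port it shouldn't have, which is exactly the kind of quiet failure this catches early.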

Practical Implementation Checklist

Here's how