Zero-Trust Security in Self-Hosting: Implementing Proper Access Controls

I learned the hard way that trusting your internal network is dangerous. Someone compromised one of my services last year, and because I'd assumed everything behind my firewall was safe, the attacker moved laterally across my entire homelab in minutes. Zero-trust security isn't just for enterprises—it's essential for anyone running self-hosted infrastructure. The principle is simple: never trust, always verify. Every connection, every user, every service needs authentication and authorization, regardless of where the request originates.

In this article, I'll walk you through implementing zero-trust security for your self-hosted environment. You'll learn how to segment your network, enforce authentication on every layer, audit access patterns, and catch threats before they spread. This isn't a theoretical exercise—I've tested these configurations on my own homelab running a mix of Docker services, reverse proxies, and internal applications.

Understanding Zero-Trust Architecture

Zero-trust assumes breach. That's the foundation. Instead of a hard perimeter with a soft interior, zero-trust treats every request as potentially hostile. You verify identity, validate device security, check permissions, and monitor behavior—all continuously.

The typical homelab trusts too much: SSH port-forwarded straight from the internet, the same password reused everywhere, a local network assumed safe, logs never collected. Zero-trust flips all of that.

For self-hosting, this means running a reverse proxy with authentication, segmenting services into network zones, enforcing MFA on everything, and collecting logs you actually review.

Layer 1: Reverse Proxy with Authentication

I prefer Caddy for this because it handles SSL/TLS automatically and supports authentication middleware cleanly. Here's my setup: every service sits behind Caddy, which forces authentication before the request even reaches the application.

# /etc/caddy/Caddyfile
{
  email [email protected]
}

# Public entry point - requires OIDC authentication
https://services.example.com {
  forward_auth localhost:9091 {
    uri /api/verify
    copy_headers Remote-User Remote-Groups
  }
  reverse_proxy /nextcloud/* localhost:8080
  reverse_proxy /vaultwarden/* localhost:8081
  reverse_proxy /immich/* localhost:8082
}

# Admin panel - stricter auth
https://admin.example.com {
  forward_auth localhost:9091 {
    uri /api/verify
    copy_headers Remote-User Remote-Groups Remote-Name
  }
  basicauth /* {
    admin $2a$14$your_bcrypt_hash_here
  }
  reverse_proxy /portainer/* localhost:9000
  reverse_proxy /monitoring/* localhost:3000
}

This configuration uses forward authentication via an auth service on port 9091. Every request to services.example.com hits the auth service first. If authentication fails, the user never reaches the backend application. The `copy_headers` directive passes verified user information downstream, so your apps know who's making requests.

Tip: Use Authelia or authentik for centralized authentication. Both are open-source and support OIDC, LDAP, and password-less authentication. I run Authelia in Docker and it handles auth for 12+ services without breaking a sweat.
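
If you go with Authelia's file-based backend, the users database is a small YAML file. Here's a minimal sketch of its format — the argon2 hash is a truncated placeholder, not a working value, and the user details are illustrative:

```yaml
# users_database.yml - sketch of Authelia's file-backend format;
# the password hash below is truncated for illustration
users:
  alice:
    displayname: "Alice Example"
    password: "$argon2id$v=19$m=65536,t=3,p=4$..."
    email: alice@example.com
    groups:
      - admin
```

Group membership here is what the `subject: "group:admin"` access-control rules match against.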

Layer 2: Network Segmentation with VLANs

Even if someone compromises one service, they shouldn't gain access to your entire internal network. I segment my homelab into three zones:

  - DMZ: the only zone exposed to the internet (the reverse proxy and auth front end)
  - Services: internal applications, allowed to talk to the data layer
  - Data: databases and storage, reachable only from the services zone

I implement this with Docker networks and UFW firewall rules. Here's my Docker Compose structure:

version: '3.8'

networks:
  # DMZ - only exposed services
  dmz:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: br-dmz
    ipam:
      config:
        - subnet: 172.20.0.0/16

  # Internal services - can talk to data layer
  services:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: br-services
    ipam:
      config:
        - subnet: 172.21.0.0/16

  # Restricted - data only
  data:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: br-data
    ipam:
      config:
        - subnet: 172.22.0.0/16

services:
  caddy:
    image: caddy:latest
    networks:
      - dmz
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    restart: unless-stopped

  authelia:
    image: authelia/authelia:latest
    networks:
      - dmz
      - services
      - data   # Authelia's Postgres storage lives on the data network
    environment:
      - AUTHELIA_DEFAULT_REDIRECTION_URL=https://auth.example.com
    volumes:
      - ./authelia/configuration.yml:/config/configuration.yml
    restart: unless-stopped

  nextcloud:
    image: nextcloud:latest
    networks:
      - services
      - data
    environment:
      - POSTGRES_HOST=postgres  # Compose DNS resolves the service name
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    depends_on:
      - postgres
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    networks:
      - data
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  caddy_data:
  postgres_data:

Then I enforce network policies with UFW on the host:

# Default deny
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH only from specific IPs
sudo ufw allow from 203.0.113.5 to any port 22

# Allow Caddy (DMZ)
sudo ufw allow from any to any port 80
sudo ufw allow from any to any port 443

# Block data network from reaching internet
sudo ufw insert 1 deny out from 172.22.0.0/16 to any

# Allow specific service-to-data traffic (only postgres)
sudo ufw allow from 172.21.0.0/16 to 172.22.0.0/16 port 5432

sudo ufw enable

A compromised Nextcloud instance can talk to its database, but it can't SSH to your NAS or reach your backup drive. That limits blast radius significantly.
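
One caveat worth knowing: Docker writes its own iptables rules, and traffic to published container ports can bypass UFW entirely. To make inter-network restrictions stick for container traffic, the rules belong in the DOCKER-USER chain. A sketch, using the same subnets as above:

```
# Docker consults DOCKER-USER before its own rules, so restrictions
# placed here apply to container traffic that UFW never sees.
# Drop traffic leaving the data subnet for anything outside private space
sudo iptables -I DOCKER-USER -s 172.22.0.0/16 ! -d 172.16.0.0/12 -j DROP
# Allow services -> data on Postgres only (inserted above the drop)
sudo iptables -I DOCKER-USER -s 172.21.0.0/16 -d 172.22.0.0/16 -p tcp --dport 5432 -j ACCEPT
```

These rules don't persist across reboots on their own; persist them with your distribution's usual mechanism (iptables-persistent or similar).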

Layer 3: Multi-Factor Authentication and Identity Verification

Passwords are single points of failure. I configure Authelia to make TOTP (time-based one-time passwords) mandatory for all users. Here's my Authelia configuration:

# authelia/configuration.yml
authentication_backend:
  file:
    path: /config/users_database.yml
    password:
      algorithm: argon2
      argon2:
        parallelism: 8
        memory: 64
        iterations: 3
        salt_length: 16
        key_length: 32

session:
  secret: ${SESSION_SECRET}
  name: authelia_session
  cookies:
    - domain: example.com
      authelia_url: https://auth.example.com
      inactivity: 1h
      expiration: 24h
      remember_me: 30d

regulation:
  max_retries: 3
  find_time: 10m
  ban_time: 20m

storage:
  postgres:
    host: postgres
    port: 5432
    database: authelia
    username: authelia
    password: ${DB_PASSWORD}

notifier:
  smtp:
    host: ${SMTP_HOST}
    port: ${SMTP_PORT}
    username: ${SMTP_USER}
    password: ${SMTP_PASSWORD}
    sender: [email protected]

access_control:
  default_policy: deny
  rules:
    # All services require two-factor
    - domain: "services.example.com"
      policy: two_factor

    # Admin services require two-factor + strict policy
    - domain: "admin.example.com"
      policy: two_factor
      subject: "group:admin"
      
    # Public endpoints without auth
    - domain: "public.example.com"
      policy: bypass

Every user must register a TOTP device (I use Bitwarden's built-in authenticator). Failed logins are tracked and accounts lock after three attempts. Sessions expire after 24 hours. This prevents credential stuffing and replay attacks.

Watch out: Test your account lockout. I discovered my Authelia setup locked me out for 20 minutes after mistyping my password—good security, but I needed to know the behavior. Also, always export TOTP backup codes and store them in a safe place.

Layer 4: Audit Logging and Threat Detection

You can't respond to threats you don't see. I collect logs from Caddy, Authelia, Docker, and UFW into a centralized system. I use Loki (log aggregator) with Grafana dashboards. Here's the Docker setup:

# docker-compose with logging
services:
  caddy:
    image: caddy:latest
    logging:
      driver: loki
      options:
        loki-url: http://loki:3100/loki/api/v1/push
        loki-batch-size: "100"
        labels: "service=caddy"
    # ... rest of config

  authelia:
    image: authelia/authelia:latest
    logging:
      driver: loki
      options:
        loki-url: http://loki:3100/loki/api/v1/push
        labels: "service=authelia"
    environment:
      - AUTHELIA_LOG_LEVEL=info
    # ... rest of config

  loki:
    image: grafana/loki:latest
    networks:
      - data
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
      - loki_data:/loki
    command: -config.file=/etc/loki/local-config.yaml
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    networks:
      - services
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      # Compose has no datasources key; Grafana reads datasources from
      # provisioning files mounted into the container
      - ./grafana/provisioning:/etc/grafana/provisioning
    restart: unless-stopped
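
One prerequisite the compose file above assumes: the loki logging driver is not built into Docker. It ships as a plugin that has to be installed on each host first:

```
# Install the Loki logging driver plugin (one-time, per host)
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
```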

I watch for patterns: multiple failed auth attempts, unusual access times, services making unexpected outbound connections, privilege escalation attempts. Set alerts for these. I get a Gotify notification when anyone from an unknown IP accesses the admin panel.
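
Those patterns can be codified rather than eyeballed. Here's a sketch of a Loki ruler alerting rule for repeated authentication failures; it assumes the Loki ruler is enabled and that Authelia's failure log lines contain the string "Unsuccessful":

```yaml
# Loki ruler rules file - a sketch; adjust the matched string to
# whatever your Authelia version actually logs on failed attempts
groups:
  - name: auth-alerts
    rules:
      - alert: RepeatedAuthFailures
        expr: sum(count_over_time({service="authelia"} |= "Unsuccessful" [5m])) > 5
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "More than five failed logins in the last five minutes"
```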

Layer 5: Secrets Management

Never hardcode passwords or API keys. I use Vaultwarden (a self-hosted Bitwarden-compatible server) for password management and environment files for service secrets. For Docker secrets at scale, HashiCorp Vault is worth the complexity.

All my Docker Compose files use .env files:

# .env
DB_PASSWORD=long_random_string_here
POSTGRES_PASSWORD=another_long_random_string
SESSION_SECRET=yet_another_secret
AUTHELIA_JWT_SECRET=jwt_secret_here

These are excluded from version control (in .gitignore), stored in encrypted backup vaults, and rotated every 90 days. Each service gets only the secrets it needs—Nextcloud doesn't have Authelia's JWT secret.
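
For generating those secrets, I lean on openssl rather than inventing strings by hand. A sketch that writes a fresh .env with owner-only permissions (variable names match the examples above):

```shell
# Generate strong random secrets and write them to .env,
# readable only by the owner
umask 177   # files created from here on are mode 0600
cat > .env <<EOF
DB_PASSWORD=$(openssl rand -hex 32)
POSTGRES_PASSWORD=$(openssl rand -hex 32)
SESSION_SECRET=$(openssl rand -hex 32)
AUTHELIA_JWT_SECRET=$(openssl rand -hex 32)
EOF
chmod 600 .env   # belt and braces on top of the umask
```

Rotation then becomes re-running the script and restarting the affected services.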

Practical Implementation: Where to Start

If you're running a homelab from a VPS (around $40/year on Hetzner or RackNerd), implement this in order:

  1. Set up Caddy with basic auth. Get your reverse proxy in place first. One day to implement.
  2. Deploy Authelia. Use its built-in users database initially. Two days including testing.
  3. Enforce TOTP on all users. Make it non-negotiable. Half a day.
  4. Segment networks with Docker and UFW. Start with DMZ/Services/Data split. One week to test thoroughly.
  5. Set up Loki and Grafana for logs. Build dashboards for failed auth, service restarts, network anomalies. One week.
  6. Enable audit alerts. Configure Gotify or Ntfy for notifications.