VPS-to-Homelab Hybrid Setup: Managing Multiple Environments

I used to run everything on my homelab. Then my internet went down for three days, and I learned a hard lesson about single points of failure. Now I run a hybrid setup: critical services on a cheap VPS, experiments and media on the home hardware, everything tied together with DNS and monitoring. It's not as complicated as it sounds, and it gives me the best of both worlds—redundancy I can afford, and the control I refuse to give up.

Why Go Hybrid?

A pure homelab is fantastic for tinkering. You own the hardware, control the network, and don't pay recurring fees. But when the ISP hiccups or a power surge hits, you're offline. A pure VPS solves that—always-on, redundant, managed—but you're paying monthly for compute you might not use, and you lose the feeling of hands-on control.

The hybrid approach lets you split responsibilities: keep low-traffic, critical services (reverse proxy, DNS, identity provider, monitoring) on an inexpensive VPS around $40–50/year (I'd recommend checking RackNerd for deals, especially their seasonal promos). Run heavy workloads—Ollama, media servers, development environments—at home where you've already sunk the hardware cost. If either side goes down, the other keeps serving what it can.

Architecture: VPS as the Front Door

Here's my mental model: the VPS acts as your public-facing gateway and identity anchor. It runs the lightweight, critical pieces: Caddy as the reverse proxy, CoreDNS for name resolution, an identity provider such as Authelia, and Prometheus for monitoring.

Your homelab runs the heavy stuff: Jellyfin, Nextcloud (or Immich), Ollama, game servers, development databases. The VPS just coordinates and proxies.

Setting Up Unified DNS

The first thing I did was move DNS off Cloudflare and into my own hands. I installed CoreDNS on the VPS; it's lightweight and fast.

#!/bin/bash
# On your VPS, install CoreDNS
mkdir -p /opt/coredns
cd /opt/coredns

# Download the latest binary
curl -L https://github.com/coredns/coredns/releases/download/v1.11.1/coredns_1.11.1_linux_amd64.tgz | tar xz

# Create a Corefile configuration
cat > Corefile <<'EOF'
.:53 {
    # Firewall port 53 to your own clients; an unrestricted
    # forwarder on a public VPS is an open resolver
    forward . 1.1.1.1 8.8.8.8
    log
    errors
}

mylab.local:53 {
    file /etc/coredns/db.mylab.local
    log
    errors
}

yourdomain.com:53 {
    file /etc/coredns/db.yourdomain.com
    log
    errors
}
EOF

# Create zone file for local services
mkdir -p /etc/coredns
cat > /etc/coredns/db.mylab.local <<'EOF'
$ORIGIN mylab.local.
@   3600 IN SOA ns.mylab.local. admin.mylab.local. 2026032901 3600 900 604800 86400
@   3600 IN NS ns.mylab.local.
ns  3600 IN A 192.168.1.1

nextcloud 3600 IN A 192.168.1.10
jellyfin  3600 IN A 192.168.1.11
ollama    3600 IN A 192.168.1.12
homelab   3600 IN A 192.168.1.1
EOF

# Run CoreDNS in the background for testing (binding port 53 needs root
# or CAP_NET_BIND_SERVICE); use a systemd unit for anything permanent
./coredns -conf Corefile &

Now point your devices to the VPS's IP as their DNS server. Everything on your internal network resolves to home IPs (192.168.x.x), but external queries go to the VPS. The VPS's zone file for yourdomain.com points the root and subdomains to the VPS itself, and Caddy (running on the VPS) reverse-proxies to your homelab.

Tip: Use split-horizon DNS: internal requests for nextcloud.yourdomain.com resolve to your homelab IP (low latency, direct), while external requests resolve to the VPS (public access). CoreDNS and Pi-hole both support this natively.
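As a concrete sketch of split-horizon, the same name gets a different A record depending on which resolver answers (203.0.113.10 is a documentation-range placeholder, not a real public IP):

```text
; Zone entry on the HOMELAB resolver (internal view):
; LAN clients go straight to the box, no round trip through the VPS
nextcloud.yourdomain.com. 3600 IN A 192.168.1.10

; Zone entry on the VPS resolver (external view):
; everyone else hits the VPS, and Caddy proxies onward
nextcloud.yourdomain.com. 3600 IN A 203.0.113.10
```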

Caddy as the Hybrid Router

Install Caddy on the VPS and configure it to proxy both local and external traffic:

#!/bin/bash
# Install Caddy on VPS
apt update && apt install -y caddy

# Edit /etc/caddy/Caddyfile
cat > /etc/caddy/Caddyfile <<'EOF'
yourdomain.com, *.yourdomain.com {
    # Caddy redirects HTTP to HTTPS automatically, so no
    # manual redirect block is needed

    # API endpoint goes to homelab Ollama
    @api path /api/*
    reverse_proxy @api http://192.168.1.12:11434

    # Nextcloud stays on homelab; Caddy sets X-Forwarded-For
    # and X-Forwarded-Proto on proxied requests by default
    @nextcloud host nextcloud.yourdomain.com
    reverse_proxy @nextcloud http://192.168.1.10:80

    # Jellyfin media streaming
    @jellyfin host jellyfin.yourdomain.com
    reverse_proxy @jellyfin http://192.168.1.11:8096

    # Fallback to VPS-hosted services
    reverse_proxy http://localhost:8080

    # A wildcard certificate needs the DNS challenge (a provider plugin);
    # on_demand instead issues a per-hostname cert on first request.
    # In production, rate-limit it with the global on_demand_tls ask option.
    tls {
        on_demand
    }
}

# Local proxy for VPS services
localhost:8080 {
    reverse_proxy 127.0.0.1:9000
}
EOF

systemctl restart caddy

This setup means: requests under /api/ go straight to the homelab's Ollama instance, Nextcloud and Jellyfin traffic is routed home by hostname, everything else falls through to services hosted on the VPS itself, and TLS terminates at the VPS so the homelab never has to expose a public certificate.

Failover and Redundancy

What happens when your internet goes down? I use a second DNS server (at Hetzner, same region, around €3/month) that serves the same zones but points critical services to fallback IPs. My Caddy config on both VPS instances is identical; I keep them in sync with a simple git repo.
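The sync job is nothing fancy; here's a hedged sketch of it (the repo path and file layout are placeholders for my actual setup):

```shell
#!/bin/bash
# Pull the shared config repo and reload Caddy only when the file changed.
# REPO_DIR and CADDYFILE are illustrative placeholders.
set -euo pipefail

REPO_DIR="${REPO_DIR:-/opt/infra-config}"
CADDYFILE="${CADDYFILE:-/etc/caddy/Caddyfile}"

# Exit 0 when the two files differ, so unattended cron runs stay cheap
config_changed() {
    local src="$1" dst="$2"
    ! cmp -s "$src" "$dst"
}

sync_caddy() {
    git -C "$REPO_DIR" pull --ff-only
    if config_changed "$REPO_DIR/Caddyfile" "$CADDYFILE"; then
        cp "$REPO_DIR/Caddyfile" "$CADDYFILE"
        # Validate before reloading so a bad push can't take the proxy down
        caddy validate --config "$CADDYFILE" && systemctl reload caddy
    fi
}
```

Both VPS instances run the same script from cron, so a push to the repo rolls out everywhere within a few minutes.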

Watch out: Don't use DNS round-robin for stateful services (like databases). If Nextcloud writes to homelab but reads from VPS, you'll get corrupted data. Use DNS for stateless proxies only, or implement proper database replication.

For critical data, I run a PostgreSQL instance on the VPS (small, just for identity and app metadata) and replicate from homelab nightly via encrypted backups. Jellyfin and media are read-only from the external perspective, so they can failover to a cached copy on the VPS without issues.
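The nightly job looks roughly like this; the hosts, database name, and GPG recipient below are illustrative placeholders, not my real values:

```shell
#!/bin/bash
# Dump the small metadata database, encrypt it, and push it to the VPS.
# Hosts, database, and recipient are placeholders.
set -euo pipefail

# Dated dump name so old backups are never clobbered
dump_name() {
    printf 'appmeta-%s.sql.gz.gpg' "$(date +%F)"
}

nightly_backup() {
    local out="/var/backups/$(dump_name)"
    # Dump on the homelab, compress, encrypt, then ship to the VPS
    pg_dump -h 192.168.1.10 -U app appmeta \
        | gzip \
        | gpg --encrypt --recipient backups@yourdomain.com --output "$out"
    rsync -az "$out" vps.yourdomain.com:/var/backups/
}
```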

Unified Monitoring

Install Prometheus on the VPS and have it scrape both the VPS itself and a node_exporter on your homelab (reachable over your VPN or tunnel):

#!/bin/bash
# On VPS, /etc/prometheus/prometheus.yml
cat > /etc/prometheus/prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'vps'
    static_configs:
      - targets: ['127.0.0.1:9100']

  - job_name: 'homelab'
    static_configs:
      - targets: ['192.168.1.1:9100']  # Node exporter on homelab gateway
    scrape_interval: 30s  # Slower poll due to VPN latency

  - job_name: 'caddy'
    static_configs:
      - targets: ['127.0.0.1:2019']

  - job_name: 'coredns'
    static_configs:
      - targets: ['127.0.0.1:9153']
EOF

# Install node_exporter on both VPS and homelab
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.1/node_exporter-1.8.1.linux-amd64.tar.gz
tar xzf node_exporter-1.8.1.linux-amd64.tar.gz
sudo cp node_exporter-1.8.1.linux-amd64/node_exporter /usr/local/bin/
sudo systemctl restart node_exporter  # assumes you've already created a systemd unit for node_exporter
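That restart assumes a systemd unit exists; a minimal sketch of one, matching the install path above, looks like this:

```ini
# /etc/systemd/system/node_exporter.service
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
User=nobody
ExecStart=/usr/local/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now node_exporter` brings it up on both machines.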

Now you can see at a glance which side is having issues. Set up alerting to notify you when the homelab goes offline so you know to SSH in and check things.
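A minimal alert rule for that looks like the following; the file name, group name, and five-minute threshold are my choices, and you'd reference the file from prometheus.yml with a rule_files entry:

```yaml
# /etc/prometheus/alerts.yml -- load via a rule_files entry in prometheus.yml
groups:
  - name: hybrid
    rules:
      - alert: HomelabDown
        expr: up{job="homelab"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Homelab node exporter unreachable for 5 minutes"
```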

Cost Reality

I spend roughly $50/year for the primary VPS (RackNerd or similar deal), €3/month for the secondary, and electricity at home. That's way less than running everything on cloud infrastructure. A small Hetzner VPS or comparable RackNerd offer gives me redundancy and a public IP without the homelab's ISP limitations.

Data Sync Strategy

Keep these in sync via nightly rsync over SSH: the Caddyfile, the CoreDNS zone files, the encrypted PostgreSQL dumps, and any small application state the fallback needs to serve on its own.

Don't sync large media files; instead, rebuild indexes on the fallback if needed.
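As an illustrative schedule (the paths and hostname are placeholders):

```text
# /etc/cron.d/homelab-sync -- nightly push of small, critical state to the VPS
# m  h  dom mon dow user command
15 3   *   *   *   root rsync -az --delete -e ssh /etc/coredns/ vps.yourdomain.com:/etc/coredns/
25 3   *   *   *   root rsync -az -e ssh /etc/caddy/Caddyfile vps.yourdomain.com:/etc/caddy/Caddyfile
```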

Next Steps

Start small: pick one critical service (maybe just your reverse proxy) and move it to a cheap VPS. Run it for a week in parallel with your homelab, then simulate an internet outage and see what breaks. That's where you'll learn the most.

Once you're comfortable, add authentication (Authelia), then monitoring, then backups. Each layer adds resilience without significantly increasing complexity or cost.
