From Synology to VPS: Migrating Your Homelab to the Cloud

I've been running a Synology DS920+ for three years. It's solid hardware, but as my homelab grew—more Docker containers, more databases, more traffic—I hit the ceiling hard. The truth is, NAS hardware is great for storage, not great for containerized workloads. Last month, I migrated everything to a Hetzner VPS, and I want to walk you through exactly how I did it, including the mistakes I made so you don't have to.

Why I Left Synology (And Why You Might Too)

Let me be clear: Synology isn't bad. It's brilliant at what it's designed to do—store files, back them up, serve media. But when you want to run 15 Docker containers simultaneously while handling database replication and reverse proxy traffic, a NAS becomes a bottleneck.

My Synology's DSM 7 Docker support is limited. CPU throttling kicks in when load spikes. Storage I/O contention means uploads slow down when a container rebuild is happening. And if you want true HA (high availability), redundancy, or auto-scaling? You're out of luck on a single box.

The economic argument sealed it: my Synology NAS cost $800 upfront, plus another $300 in drives and licensing. It uses 60W constantly—about 25 bucks a month in electricity. A Hetzner CCX12 VPS is $4.90/month with 2 vCPUs and 4GB RAM. Over two years, the VPS wins by hundreds of dollars, and I get infinitely better compute.

That said, if you're purely storing photos and running Plex, stay with Synology. This migration makes sense only if you're running services.

The Pre-Migration Audit: Know What You're Moving

Before I touched anything, I inventoried my entire setup. Running on the DS920+ were MariaDB, Nextcloud, Vaultwarden, a small custom web app, and an assortment of scheduled tasks.

This audit is crucial. Write down every service, every database, every static file, every scheduled task. You will forget something otherwise—I almost left behind a custom certificate renewal script.

Tip: Export your Synology Docker container configs before you start. SSH into the NAS and copy the config.v2.json from each container. Note that on DSM, Docker's data lives under /volume1/@docker/containers/ rather than the usual /var/lib/docker/containers/. This gives you the environment variables and mount points you'll need to recreate them.
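Rather than copying JSON files by hand, docker inspect captures the same data in one place. Here's a minimal sketch to run on the NAS; the output directory name is my own choice:

```shell
#!/bin/sh
# Save every running container's full config (env vars, mounts, ports)
# to one JSON file per container, for reference when rebuilding on the VPS.
mkdir -p "$HOME/container-configs"
# the guard lets the script no-op on machines without docker
if command -v docker >/dev/null 2>&1; then
  for name in $(docker ps --format '{{.Names}}'); do
    docker inspect "$name" > "$HOME/container-configs/$name.json"
  done
fi
```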

Step 1: Back Up Everything (Seriously)

I cannot stress this enough: take multiple, verified backups before you begin. I created three:

  1. Full disk snapshot of the NAS (using Synology's Snapshot feature)
  2. Manual tar exports of the Docker volumes
  3. mysqldump export of the MariaDB database

Here's the command I used to dump the databases from Synology:

ssh admin@synology-ip
# Inside Synology shell. Pass the password inline (export MYSQL_ROOT_PASSWORD
# first); an interactive -p prompt doesn't play well with docker exec and
# output redirection:
docker exec synology-mariadb mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases > /volume1/backup-$(date +%Y%m%d).sql
# Then copy to your VPS:
scp admin@synology-ip:/volume1/backup-20260404.sql ~/vps-setup/

For the Docker volumes, I exported them as tar archives:

ssh admin@synology-ip

# List all volumes
docker volume ls

# Export each volume
docker run --rm -v nextcloud_data:/data -v $(pwd):/backup alpine tar czf /backup/nextcloud_data.tar.gz -C /data .
docker run --rm -v vaultwarden_data:/data -v $(pwd):/backup alpine tar czf /backup/vaultwarden_data.tar.gz -C /data .

# Verify the exports worked
ls -lh *.tar.gz
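An ls only proves the files exist, not that they're readable. Before trusting the archives, I'd suggest checking each one end-to-end; a small helper sketch (the throwaway demo archive is just for illustration):

```shell
#!/bin/sh
# verify_archive: exit 0 only if the gzip tarball can be read end-to-end
verify_archive() {
  tar tzf "$1" > /dev/null 2>&1
}

# demo: build a throwaway archive and check it
tmp=$(mktemp -d)
echo "hello" > "$tmp/file.txt"
tar czf "$tmp/demo.tar.gz" -C "$tmp" file.txt
verify_archive "$tmp/demo.tar.gz" && echo "archive OK"

# in practice: for f in *.tar.gz; do verify_archive "$f" || echo "BAD: $f"; done
```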

Step 2: Provision the VPS and Set Up Docker

I chose Hetzner's CCX12 in their Nuremberg datacenter (RAID-10 storage, 2.4 GHz vCPUs, 4GB RAM). On a fresh Ubuntu 22.04 image, I ran through the standard hardening and Docker setup.

Here's my VPS bootstrap script—I run this on every new instance:

#!/bin/bash
set -e

# Update system
apt-get update && apt-get upgrade -y

# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# (this script runs as root, which already has Docker access; add any
# non-root deploy user to the docker group instead: usermod -aG docker <user>)

# Install Docker Compose
curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Install fail2ban and UFW
apt-get install -y ufw fail2ban

# Configure UFW (allow only SSH, HTTP, HTTPS)
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw --force enable  # --force skips the interactive prompt so the script doesn't hang

# Create backup directory
mkdir -p /root/backups
cd /root/backups

# Done
docker --version
docker-compose --version
echo "VPS ready for deployment"

After that runs (takes about 3 minutes), I verified the Docker daemon was working and then began restoring data.

Step 3: Restore Data and Rebuild Containers

I first restored the database, then the Docker volumes, then brought up the containers in dependency order.

Database restore (from the mysqldump I created earlier):

# On the VPS: create the database volume first, so the imported data
# survives after the temporary container is removed (declare the same
# volume as external in your docker-compose.yml so it gets reused)
docker volume create mariadb_data

# Spin up a temporary MariaDB instance backed by that volume
docker run --name mariadb-restore -e MYSQL_ROOT_PASSWORD=temppass -v mariadb_data:/var/lib/mysql -d mariadb:10.6

# Wait until it's actually accepting connections (a fixed sleep is a race)
until docker exec mariadb-restore mysqladmin -u root -ptemppass ping --silent; do sleep 2; done

# Import the dump
docker exec -i mariadb-restore mysql -u root -ptemppass < backup-20260404.sql

# Verify
docker exec mariadb-restore mysql -u root -ptemppass -e "SHOW DATABASES;"

# Stop and remove the temp container; the data stays in the mariadb_data volume.
# Note: the dump includes the mysql system tables, so after the import the
# restored root credentials apply, not MYSQL_ROOT_PASSWORD from compose.
docker stop mariadb-restore && docker rm mariadb-restore

For the Docker volumes, I created them first, then extracted the tar archives:

# Create the volumes Docker Compose will use
docker volume create nextcloud_data
docker volume create vaultwarden_data

# Extract the backups into the volumes
docker run --rm -v nextcloud_data:/data -v /root/backups:/backups alpine tar xzf /backups/nextcloud_data.tar.gz -C /data
docker run --rm -v vaultwarden_data:/data -v /root/backups:/backups alpine tar xzf /backups/vaultwarden_data.tar.gz -C /data

# Verify the data is there
docker run --rm -v nextcloud_data:/data alpine ls -lah /data | head -20

Then I created the docker-compose.yml for all services. This is where you convert from Synology's GUI into declarative infrastructure:

version: '3.8'

services:
  mariadb:
    image: mariadb:10.6
    container_name: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: ${DB_NAME}
      MYSQL_USER: ${DB_USER}
      MYSQL_PASSWORD: ${DB_PASSWORD}
    # no published ports: containers reach MariaDB over the homelab network,
    # and publishing 3306 would expose it to the internet (Docker's iptables
    # rules bypass UFW)
    volumes:
      - mariadb_data:/var/lib/mysql
    restart: unless-stopped
    networks:
      - homelab

  nextcloud:
    image: nextcloud:27-apache
    container_name: nextcloud
    depends_on:
      - mariadb
    environment:
      MYSQL_HOST: mariadb
      MYSQL_DATABASE: ${DB_NAME}
      MYSQL_USER: ${DB_USER}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      # (admin credentials only apply to a fresh install; they're ignored
      # once the restored config.php is in place)
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ${NEXTCLOUD_PASSWORD}
    ports:
      - "8080:80"
    volumes:
      - nextcloud_data:/var/www/html
    restart: unless-stopped
    networks:
      - homelab

  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    environment:
      DOMAIN: https://vault.yourdomain.com
      SIGNUPS_ALLOWED: "false"
    ports:
      - "8081:80"
    volumes:
      - vaultwarden_data:/data
    restart: unless-stopped
    networks:
      - homelab

  caddy:
    image: caddy:2.7-alpine
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - homelab
    restart: unless-stopped

volumes:
  mariadb_data:
    external: true
  caddy_data:
  caddy_config:
  nextcloud_data:
    external: true
  vaultwarden_data:

networks:
  homelab:
    driver: bridge

I also created a .env file with sensitive values (not committed to git, obviously):

DB_ROOT_PASSWORD=SecureRootPass123!
DB_NAME=homelab
DB_USER=homelab_user
DB_PASSWORD=SecureDBPass456!
NEXTCLOUD_PASSWORD=NextcloudAdminPass789!
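A habit worth adopting: generate these secrets instead of inventing them. A quick sketch with openssl (the variable names match the .env above):

```shell
#!/bin/sh
# Generate random secrets for the .env file; openssl ships with Ubuntu
DB_ROOT_PASSWORD=$(openssl rand -base64 24)
DB_PASSWORD=$(openssl rand -base64 24)
NEXTCLOUD_PASSWORD=$(openssl rand -base64 24)
printf 'DB_ROOT_PASSWORD=%s\nDB_PASSWORD=%s\nNEXTCLOUD_PASSWORD=%s\n' \
  "$DB_ROOT_PASSWORD" "$DB_PASSWORD" "$NEXTCLOUD_PASSWORD"
```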

Then the standard startup:

docker-compose up -d
docker-compose logs -f  # Watch the startup

Watch out: Nextcloud is paranoid about where it's accessed from. If you change the domain or IP, it locks you out. Make sure your Caddyfile reverse proxy sets the correct upstream headers with header_up X-Real-IP {remote_host} (Caddy adds X-Forwarded-For and X-Forwarded-Proto itself). Also, after restore, update the trusted_domains array in Nextcloud's config/config.php inside the data volume; once you're locked out, the admin UI can't fix this for you.
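If you'd rather not edit config.php by hand, Nextcloud's occ CLI can add the domain from the VPS host. This assumes the container name from the compose file and that index 0 already holds your old domain:

```shell
# Add the new domain as an additional trusted domain (index 1);
# run on the VPS host once the nextcloud container is up
docker exec -u www-data nextcloud php occ config:system:set trusted_domains 1 --value=cloud.yourdomain.com
```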

Step 4: Update DNS and Certificate Migration

This is the critical cutover moment. I set up Caddy to handle reverse proxying and Let's Encrypt renewal, but I kept the old Synology running for 48 hours as a fallback.

My Caddyfile (in the Caddy container):

cloud.yourdomain.com {
    # reverse_proxy sets X-Forwarded-For and X-Forwarded-Proto automatically;
    # header_up (not "header") adds custom upstream headers
    reverse_proxy nextcloud:80 {
        header_up X-Real-IP {remote_host}
    }
}

vault.yourdomain.com {
    reverse_proxy vaultwarden:80 {
        header_up X-Real-IP {remote_host}
    }
}

app.yourdomain.com {
    reverse_proxy custom-app:5000
}

I updated my DNS A records to point to the VPS IP address. TTL was already set to 300 seconds, so propagation was fast. Traffic flipped over within minutes.

Step 5: Verify and Monitor

For the first 24 hours, I monitored everything obsessively: container logs, resource usage, and certificate issuance.

I also set up monitoring. A simple health check script runs every 5 minutes:

#!/bin/bash
# Check if services are responding
curl -sf https://cloud.yourdomain.com > /dev/null || echo "Nextcloud down" | mail -s "Alert" [email protected]
curl -sf https://vault.yourdomain.com > /dev/null || echo "Vaultwarden down" | mail -s "Alert" [email protected]
docker exec mariadb mysqladmin -u root -p${DB_ROOT_PASSWORD} ping > /dev/null || echo "DB down" | mail -s "Alert" [email protected]
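To actually run it every 5 minutes, a cron entry along these lines works. The script path, the .env path, and a configured MTA for the mail command are all assumptions on my part:

```shell
# /etc/cron.d/homelab-healthcheck: run the probe every 5 minutes as root.
# Sourcing the .env with set -a exports DB_ROOT_PASSWORD, which cron
# would not otherwise provide to the script.
*/5 * * * * root sh -c 'set -a; . /root/homelab/.env; /root/bin/healthcheck.sh'
```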

Cost and Performance Comparison

Let me be transparent about the economics: the Synology was $800 upfront plus $300 in drives and licensing, and draws roughly $25/month in electricity; the VPS is $4.90/month with nothing upfront.
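As a sanity check, here's the two-year arithmetic using the numbers quoted earlier in the post (VPS rounded from $4.90 to $5/month for integer math):

```shell
#!/bin/sh
# Two-year cost comparison from the figures earlier in the post
nas=$(( 800 + 300 + 24 * 25 ))  # hardware + drives/licensing + ~$25/mo power
vps=$(( 24 * 5 ))               # monthly fee only, no upfront cost
echo "NAS over two years: \$$nas"
echo "VPS over two years: \$$vps"
```

That's a gap of roughly $1,600 over two years once electricity is counted.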

Performance-wise, the VPS is dramatically faster. Docker container startup went from 45 seconds on the Synology to 8 seconds on the Hetzner CCX12. Nextcloud file operations are 3x quicker. Database queries that used to spike the NAS to 100% CPU now barely register.

The only downside: I lost 1.2TB of local storage. I solved that by keeping the Synology as a backup target (it now receives nightly rsync copies from the VPS via SSH).
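That nightly copy is one cron line on the VPS. The destination path on the NAS is my assumption, and SSH key authentication to the Synology must already be set up:

```shell
# /etc/cron.d/offsite-backup: push /root/backups to the Synology at 03:00
0 3 * * * root rsync -az --delete -e ssh /root/backups/ admin@synology-ip:/volume1/vps-backups/
```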

What I'd Do Differently

If I did this again:

Next Steps

If you're considering this move, start by pricing a VPS that matches your workload. Hetzner, Contabo, and RackNerd are all solid choices for homelabs (I prefer Hetzner for their transparency and uptime). Then, on a weekend, do a dry-run migration on a test server. Restore your backups, bring up your services, poke around. If something breaks, you've learned it in a safe environment.

The VPS path isn't right for everyone—Synology is genuinely the better choice if you primarily store files. But if you're running containers, databases, and services? The cost, performance, and flexibility of cloud infrastructure will change how you think about your homelab.
