Deploying a VPS with Docker and Automated Backups
When I first moved my self-hosted services from a dusty Raspberry Pi to a proper VPS, I learned a hard lesson: without automated backups, you're one bad Docker restart away from losing everything. In this guide, I'll walk you through spinning up a VPS, containerizing your services with Docker, and setting up bulletproof automated backups that actually work when you need them.
Why Docker on a VPS Makes Sense
I prefer Docker on a VPS because it gives me isolation, reproducibility, and portability without the overhead of full virtualization. Each service runs in its own container—if Jellyfin crashes, your reverse proxy stays up. You can move your entire setup to another server by copying a docker-compose.yml file. Plus, rolling back a broken update is as simple as pulling the previous image tag.
The alternative—running services directly on the host OS—is a security and maintenance nightmare. You'll have Python version conflicts, Node.js upgrades breaking things, and port collisions that make you want to throw the server out the window.
Choosing Your VPS Provider
I've tested RackNerd, Hetzner, and Contabo for homelab workloads. Here's my honest take:
RackNerd offers solid KVM VPS plans starting at $1–2/month (seriously) for their entry tier, with consistent uptime and reasonable support. Their US data centers have decent latency for North American users. Look for their "Yearly Specials" section, where they drop the real deals.
Hetzner is my top pick for CPU-heavy workloads and Ollama deployments. Their cloud VPSes at €3–5/month are beasts for the price, with NVMe storage as standard. The control panel is snappy, and you get API access for automation.
Contabo sits in the middle: cheaper than Hetzner, more reliable than budget providers. Their VPS Plus plan ($4/month) includes 8GB RAM and 160GB SSD—I use this for Nextcloud deployments.
For this tutorial, I'm assuming a 2GB RAM, 40GB SSD entry-level VPS running Ubuntu 22.04 LTS. Adjust the sizing based on your actual workloads.
Initial VPS Hardening
Before Docker goes near your server, lock it down. I always run these steps on a fresh deploy:
#!/bin/bash
# Update and upgrade
apt update && apt upgrade -y
# Install essentials
apt install -y curl wget git htop ufw fail2ban
# Disable root login and password auth (assumes SSH key already set up)
# The \? makes sed match the directives whether or not they're commented out
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh
# Enable UFW and allow SSH
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
# Configure fail2ban for SSH brute-force protection
systemctl enable fail2ban
systemctl start fail2ban
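fail2ban's packaged defaults for SSH are fairly lenient. I'd tighten them with a jail override; the file path is the standard override location, but the retry counts and ban times below are my own suggestions, so tune them to taste:

```ini
# /etc/fail2ban/jail.local -- overrides the packaged defaults
[sshd]
enabled  = true
port     = 22
maxretry = 3      # ban after 3 failed attempts...
findtime = 10m    # ...within a 10-minute window
bantime  = 1h     # ban the offending IP for one hour
```

Restart fail2ban after editing (systemctl restart fail2ban) and confirm the jail is active with fail2ban-client status sshd.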
Installing Docker and Docker Compose
I use the official Docker installation script because it handles repository setup and GPG keys correctly:
#!/bin/bash
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# Add your user to the docker group (avoid sudo for every command)
# SUDO_USER covers the common case of running this script via sudo
usermod -aG docker "${SUDO_USER:-$USER}"
# Install the standalone Docker Compose v2 binary
# (the install script above already provides the "docker compose" plugin;
# this adds the classic docker-compose command used in the rest of this guide.
# Note the lowercase OS name: v2 release assets are named docker-compose-linux-x86_64)
curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m)" \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# Verify installation
docker --version
docker-compose --version
After this, log out and back in so the docker group change takes effect. Then test with docker run hello-world—if you see "Hello from Docker!", you're good.
Deploying Your First Stack
I'll show you a basic stack with Caddy (reverse proxy), a placeholder service, and persistent volumes. This is the template I use for every deployment:
# docker-compose.yml
version: '3.8'

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - appnet

  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    depends_on:
      - db
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: ${DB_PASSWORD}
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: ${NC_PASSWORD}
      NEXTCLOUD_TRUSTED_DOMAINS: "yourdomain.com"
    volumes:
      - nextcloud_data:/var/www/html
      - nextcloud_config:/var/www/html/config
    networks:
      - appnet
    labels:
      - "com.example.description=Nextcloud File Server"

  db:
    image: mariadb:latest
    container_name: nextcloud_db
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - appnet

volumes:
  caddy_data:
  caddy_config:
  nextcloud_data:
  nextcloud_config:
  db_data:

networks:
  appnet:
    driver: bridge
Create a .env file alongside this compose file with your secrets:
DB_ROOT_PASSWORD=super_secure_root_pass_here
DB_PASSWORD=secure_nextcloud_pass_here
NC_PASSWORD=secure_admin_pass_here
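Rather than inventing these by hand, generate them. A quick sketch using openssl (which ships with Ubuntu) to write the whole file in one go:

```shell
# Generate random secrets and write the .env file in one shot
cat > .env <<EOF
DB_ROOT_PASSWORD=$(openssl rand -base64 24)
DB_PASSWORD=$(openssl rand -base64 24)
NC_PASSWORD=$(openssl rand -base64 24)
EOF
chmod 600 .env   # readable only by the owner
```

24 random bytes base64-encode to 32 characters, which is plenty for a database password you'll never type by hand.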
Create a Caddyfile for your reverse proxy. Caddy's reverse_proxy directive sets X-Forwarded-For and X-Forwarded-Host on upstream requests automatically, so there's no need to add them by hand:

yourdomain.com {
    reverse_proxy nextcloud:80
}
Then spin it up with docker-compose up -d. Caddy will automatically request HTTPS certificates from Let's Encrypt. Check logs with docker-compose logs -f caddy.
Never commit your .env file to Git. Add it to .gitignore immediately. Loading secrets from a git-ignored environment file isn't perfect, but it beats hardcoding passwords in YAML, which is asking for trouble.

Setting Up Automated Backups
This is the part that saved my skin when a Nextcloud database corruption happened at 2 AM. I use a combination of volume snapshots and off-server backup storage.
Create a backup script at /home/user/backup.sh:
#!/bin/bash
set -e

BACKUP_DIR="/backups/vps"
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_NAME="docker-backup-${TIMESTAMP}"
COMPOSE_DIR="/home/user/stack"   # adjust to wherever docker-compose.yml lives

# Run from the compose directory so docker-compose and the config copies work
cd "${COMPOSE_DIR}"

# Create backup directory
mkdir -p "${BACKUP_DIR}/${BACKUP_NAME}"

# Pause containers so the volumes are quiescent while we copy them
echo "Pausing containers..."
docker-compose pause

# Backup volumes
# Note: Compose may prefix volume names with the project name
# (e.g. stack_nextcloud_data) -- check docker volume ls
echo "Backing up volumes..."
docker run --rm \
  -v nextcloud_data:/source:ro \
  -v "${BACKUP_DIR}/${BACKUP_NAME}":/backup \
  alpine tar czf /backup/nextcloud_data.tar.gz -C /source .
docker run --rm \
  -v db_data:/source:ro \
  -v "${BACKUP_DIR}/${BACKUP_NAME}":/backup \
  alpine tar czf /backup/db_data.tar.gz -C /source .

# Backup docker-compose and config files
cp docker-compose.yml "${BACKUP_DIR}/${BACKUP_NAME}/"
cp Caddyfile "${BACKUP_DIR}/${BACKUP_NAME}/"
cp .env "${BACKUP_DIR}/${BACKUP_NAME}/.env.backup"

# Resume containers
echo "Resuming containers..."
docker-compose unpause

# Create checksums so restores can be verified
cd "${BACKUP_DIR}/${BACKUP_NAME}"
sha256sum * > checksum.sha256
cd -

# Upload to offsite storage (rsync to a remote server, falling back to S3).
# No --delete here: that would wipe older backups from the remote.
echo "Uploading backup..."
rsync -avz "${BACKUP_DIR}/${BACKUP_NAME}" [email protected]:/mnt/backups/ || \
  aws s3 sync "${BACKUP_DIR}/${BACKUP_NAME}" "s3://my-backup-bucket/vps/${BACKUP_NAME}"

# Cleanup old backups locally (keep last 30 days);
# -mindepth 1 keeps find from matching BACKUP_DIR itself
find "${BACKUP_DIR}" -mindepth 1 -maxdepth 1 -type d -mtime +${RETENTION_DAYS} -exec rm -rf {} \;

echo "Backup completed: ${BACKUP_NAME}"
Make it executable: chmod +x /home/user/backup.sh
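It's also worth seeing how the checksum step behaves before you rely on it. A self-contained demo (the "archive" is a stand-in file, not a real volume backup):

```shell
# Demonstrate the sha256sum round-trip the backup script relies on
tmp=$(mktemp -d)
cd "$tmp"
echo "pretend this is a volume archive" > nextcloud_data.tar.gz
sha256sum nextcloud_data.tar.gz > checksum.sha256
# Verification passes while the file is intact...
sha256sum -c checksum.sha256
# ...and fails loudly if the backup is corrupted in transit
echo "bit rot" >> nextcloud_data.tar.gz
sha256sum -c checksum.sha256 || echo "corruption detected"
```

Run the same sha256sum -c on the remote copy after upload and you'll know the transfer didn't mangle anything.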
Now add it to crontab to run daily at 2 AM:
crontab -e
# Add this line:
0 2 * * * /home/user/backup.sh >> /var/log/docker-backup.log 2>&1
I always test the backup script once before trusting it. Run it manually first, verify the tar files are created, check the remote storage actually receives them. Untested backups are worthless.
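Testing also means restoring. A restore sketch for one volume follows the same docker run pattern as the backup script; the backup path and timestamp here are illustrative, and your volume may carry a Compose project-name prefix, so check docker volume ls first:

```shell
# Hypothetical restore of the Nextcloud data volume from a backup archive
docker-compose stop nextcloud   # stop the service using the volume
docker run --rm \
  -v nextcloud_data:/target \
  -v /backups/vps/docker-backup-20250101_020000:/backup:ro \
  alpine sh -c "rm -rf /target/* && tar xzf /backup/nextcloud_data.tar.gz -C /target"
docker-compose start nextcloud
```

Do this once on a throwaway volume so the first time you restore isn't during an outage.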
Monitoring and Maintenance
I use Watchtower to automatically pull fresh images and restart affected containers when updates are available. Add this to your compose file:
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      # Watchtower uses six-field cron (seconds first): daily at 03:00
      WATCHTOWER_SCHEDULE: "0 0 3 * * *"
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_REMOVE_VOLUMES: "true"
    networks:
      - appnet
This checks for image updates every day at 3 AM and pulls updates automatically. I also recommend Uptime Kuma for monitoring your services—it sends you Telegram/Discord notifications if Nextcloud goes down.
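Uptime Kuma runs happily in the same stack. A minimal service entry might look like this; the host port mapping and volume name are my choices:

```yaml
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - kuma_data:/app/data
    networks:
      - appnet
```

Remember to declare kuma_data under the top-level volumes: key, and consider putting it behind Caddy instead of exposing port 3001 directly.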
What I'd Do Differently Next Time
Use named volumes instead of bind mounts where possible: they're easier to back up and sidestep host permission issues. Always set restart policies; restart: unless-stopped is my default. And don't let container processes run as root: set a non-root user inside the container even though you control the host.
Next steps: once this stack is stable, add Authelia to put authentication in front of sensitive services, set up log rotation with logrotate so logs don't fill your disk, and define Docker health checks so failing containers get flagged; Caddy can route around unhealthy upstreams, but restarting them is Docker's job, not the Caddyfile's.
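Docker's built-in healthcheck is the simplest starting point for this. A sketch for the Nextcloud service, assuming curl is available inside the image (swap in whatever tool the image actually ships):

```yaml
  nextcloud:
    # ...existing config from the stack above...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/status.php"]
      interval: 60s
      timeout: 10s
      retries: 3
      start_period: 2m
```

Note that Docker marks the container unhealthy but won't restart it on its own; pair this with a watcher like willfarrell/autoheal if you want automatic restarts.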
If you're looking to spin up your first VPS, RackNerd's entry plans are genuinely affordable and reliable for homelab work. Just don't skip the hardening step: too many VPSes get compromised in their first 48 hours because people rush to Docker without locking down SSH.