Deploying Immich (Self-Hosted Photos) on a VPS with Docker
I've been running Immich on a personal VPS for eight months now, and it's completely replaced Google Photos for my household. Unlike proprietary cloud services that scan and analyze your imagery, Immich keeps everything on your hardware and under your control (note that Immich does not encrypt photos at rest; if you want disk encryption, that's on you to set up). In this guide, I'll walk you through deploying a production-ready Immich instance on a budget VPS—including Docker Compose setup, SSL termination with Caddy, storage strategy, and backups.
Why Immich Over Google Photos?
The privacy argument is obvious: Google Photos analyzes your photos. But there's more. Immich offers:
- Machine learning locally: Facial recognition, object detection, and CLIP-based smart search all run on your hardware—no API calls to external servers.
- Unlimited storage: You pay for disk, not per-gigabyte subscription fees.
- Family sharing: Create shared libraries for your household without trusting a corporation's terms of service.
- Raw photo support: Immich handles RAW files natively—critical if you shoot with a dedicated camera.
- Mobile apps: Immich has fully functional iOS and Android clients with auto-backup.
The only downside: you're responsible for backups and uptime. That's why I'm running it on a reliable VPS instead of a home server.
Choosing Your VPS
For Immich, you'll need reasonable CPU (for thumbnail generation) and sufficient disk space. I use a RackNerd KVM VPS with 4 vCores and 8 GB RAM—roughly £15/month. That's enough for a household of five uploading 100–200 photos weekly with ML features enabled.
Minimum specs I'd recommend:
- 2 vCores, 4 GB RAM
- 100 GB SSD (for OS and Immich binaries)
- Additional disk for photo storage (can be mounted separately)
- Ubuntu 22.04 LTS or Debian 12
RackNerd's pricing is competitive and their uptime is solid—I've had 99.8% availability over six months. For larger photo libraries (50,000+ images), bump to 8 GB RAM and consider a second storage volume.
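Before ordering, it's worth sanity-checking the disk maths. A quick back-of-envelope estimate (the upload rate matches the household above; the 5 MB average file size and 3-year horizon are my own assumptions, and RAW shooters should budget 25 MB+ per file):

```shell
# Rough storage estimate; upload rate from the household above, file size assumed
photos_per_week=200
avg_mb=5
years=3
echo "$(( photos_per_week * 52 * years * avg_mb / 1024 )) GiB over ${years} years"
```

That lands around 150 GiB of originals alone. Immich also generates thumbnails, previews, and ML embeddings, so add 10–20% headroom on top of whatever the estimate says.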
Installing Docker and Prerequisites
Log into your VPS and install Docker and Docker Compose:
#!/bin/bash
# Update system packages
sudo apt update && sudo apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Add your user to the docker group (so you don't need sudo every time)
sudo usermod -aG docker $USER
newgrp docker
# The get.docker.com script already installs the Compose v2 plugin ("docker compose").
# Optionally install the standalone binary too, if you prefer the "docker-compose" command:
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Verify installations
docker --version
docker-compose --version
After running this, log out and back in so the docker group permissions take effect.
Setting Up Immich with Docker Compose
Immich requires a PostgreSQL database and Redis cache alongside the main application server. I'll use Docker Compose to orchestrate all three. Create a dedicated directory:
sudo mkdir -p /opt/immich/{db,photos,uploads,ml_cache}
sudo chown -R $USER: /opt/immich
cd /opt/immich
Now create the docker-compose.yml file. I'm using Immich v1.120.0 (current as of March 2026), but check the official repo for the latest tag:
services:
  postgres:
    # Immich needs the pgvecto.rs vector extension; a plain postgres image will not work
    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0
    container_name: immich-postgres
    restart: always
    environment:
      POSTGRES_DB: immich
      POSTGRES_USER: immich
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_INITDB_ARGS: "--encoding=UTF8"
    volumes:
      - ./db:/var/lib/postgresql/data
    # No ports: block; the database should only be reachable on the compose network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U immich"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: immich-redis
    restart: always
    # Like postgres, redis stays internal-only
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  immich-server:
    image: ghcr.io/immich-app/immich-server:v1.120.0
    container_name: immich-server
    restart: always
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      DB_HOSTNAME: postgres
      DB_USERNAME: immich
      DB_PASSWORD: ${DB_PASSWORD}
      DB_DATABASE_NAME: immich
      REDIS_HOSTNAME: redis
      REDIS_PORT: 6379
      JWT_SECRET: ${JWT_SECRET}
      IMMICH_LOG_LEVEL: log
      IMMICH_MACHINE_LEARNING_ENABLED: "true"
      IMMICH_MACHINE_LEARNING_URL: http://immich-ml:3003
    volumes:
      - ./uploads:/usr/src/app/upload
      - ./photos:/photos
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3001:3001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3001/api/server/ping"]
      interval: 10s
      timeout: 10s
      retries: 5

  immich-ml:
    image: ghcr.io/immich-app/immich-machine-learning:v1.120.0
    container_name: immich-ml
    restart: always
    environment:
      IMMICH_LOG_LEVEL: log
    volumes:
      # Model cache only; the ML container doesn't need access to uploads,
      # and the server reaches it over the compose network on port 3003
      - ./ml_cache:/cache
Create a .env file to store secrets securely:
# Generate strong secrets
DB_PASSWORD=$(openssl rand -base64 32)
JWT_SECRET=$(openssl rand -base64 32)
cat > .env << EOF
DB_PASSWORD=$DB_PASSWORD
JWT_SECRET=$JWT_SECRET
EOF
chmod 600 .env
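One gotcha: if openssl fails silently (it shouldn't, but paranoia is cheap with credentials), you end up with empty secrets. Here's a small sanity check; check_env_secrets is my own helper, not part of Immich or Docker:

```shell
# Returns success only when both secrets exist and look non-trivial (20+ chars)
check_env_secrets() {
  local count
  count=$(grep -cE '^(DB_PASSWORD|JWT_SECRET)=.{20,}' "$1" 2>/dev/null) || true
  [ "${count:-0}" -eq 2 ]
}
check_env_secrets .env && echo "secrets look sane" || echo "WARNING: check .env"
```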
echo ".env" >> .gitignore if you're tracking this in Git. Also, rotate JWT_SECRET and DB_PASSWORD every 6–12 months for security.Start the containers:
docker-compose up -d
Monitor startup with:
docker-compose logs -f immich-server
Wait for the message "Database migrations executed successfully" before proceeding. This typically takes 60–90 seconds on first run.
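Rather than watching logs by hand, you can poll the same ping endpoint the compose healthcheck uses. This wrapper is my own convenience sketch, not part of Immich:

```shell
# Poll Immich's health endpoint until it answers, then return success
wait_for_immich() {
  local url="${1:-http://localhost:3001/api/server/ping}"
  local tries="${2:-30}" i
  for i in $(seq 1 "$tries"); do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "Immich is up (attempt $i)"
      return 0
    fi
    sleep 2
  done
  echo "Immich not ready after $tries attempts" >&2
  return 1
}

# Usage: wait_for_immich && echo "safe to proceed"
```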
Reverse Proxy with Caddy and SSL
I prefer Caddy for reverse proxying Immich—it handles SSL termination automatically via Let's Encrypt with zero configuration. Install Caddy on your VPS (outside Docker):
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install -y caddy
# Verify
caddy version
Create your Caddyfile. Replace photos.example.com with your actual domain:
sudo tee /etc/caddy/Caddyfile > /dev/null << 'EOF'
photos.example.com {
    encode gzip
    # reverse_proxy sets X-Forwarded-For/X-Forwarded-Proto and handles
    # WebSocket upgrades automatically; no manual header or matcher blocks needed
    reverse_proxy localhost:3001
}
EOF
sudo systemctl enable caddy
sudo systemctl start caddy
sudo systemctl status caddy
Caddy automatically obtains an SSL certificate and renews it well before expiry. The apt-packaged Caddy logs to the systemd journal rather than a file, so check it with:
sudo journalctl -u caddy -f
Access Immich at https://photos.example.com. On first load, create an admin account.
Optimizing Storage and Performance
Immich generates thumbnails, preview images, and ML embeddings—this consumes extra disk space and CPU. I mount a secondary volume for photos to separate OS from data:
# Assuming /dev/sdb is your second disk
# WARNING: mkfs erases everything on the device; double-check with lsblk first
lsblk
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/photos-storage
sudo mount /dev/sdb /mnt/photos-storage
sudo chown 1000:1000 /mnt/photos-storage
# Make it permanent in /etc/fstab (nofail keeps the VPS bootable if the disk vanishes)
echo "/dev/sdb /mnt/photos-storage ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
# Confirm the fstab entry parses cleanly
sudo mount -a
findmnt /mnt/photos-storage
Update docker-compose.yml to use this mount point:
sed -i 's|./photos:/photos|/mnt/photos-storage:/photos|g' docker-compose.yml
# "restart" alone won't pick up volume changes; up -d recreates the container
docker-compose up -d
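It's worth confirming the server container actually sees the relocated storage. The container name and mount path come from the compose file above; the helper name is my own:

```shell
# Show the filesystem backing /photos inside the server container;
# it should be /dev/sdb, not the root filesystem
check_photos_mount() {
  docker exec immich-server df -h /photos
}

# Usage: check_photos_mount
```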
On the database side, PostgreSQL's defaults are perfectly adequate for a household-sized library, so resist the urge to tune it. For ML performance, allocate adequate CPU. Check current usage:
docker stats immich-ml
If CPU is pegged at 100%, ML is churning through jobs. This is normal during batch processing of new uploads. You can throttle it by adding CPU and memory limits to the service in docker-compose.yml:
  immich-ml:
    # ...existing configuration...
    cpus: "2"
    mem_limit: 3g
Mobile App Setup and Auto-Backup
Download Immich from Google Play or Apple App Store. In the app:
- Tap Settings → Server Endpoint
- Enter https://photos.example.com
- Log in with your admin account
- Enable Backup → Auto Backup to sync photos automatically
- Choose which albums to back up (I back up everything except screenshots)
Auto-backup respects your mobile data limits—it'll only upload over WiFi unless you change settings.
Backing Up Your Immich Instance
Your VPS disk is one point of failure. Back up both your database and photos regularly. I use a nightly cron job:
#!/bin/bash
# /usr/local/bin/immich-backup.sh
set -euo pipefail
BACKUP_DIR="/backups/immich"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
CONTAINER_NAME="immich-postgres"
mkdir -p "$BACKUP_DIR"
# Backup PostgreSQL; pipefail ensures a failed pg_dump can't leave a "successful" empty file
docker exec "$CONTAINER_NAME" pg_dump -U immich immich | gzip > "$BACKUP_DIR/db_$TIMESTAMP.sql.gz"
# Backup photos (optional if you're syncing elsewhere)
# tar czf "$BACKUP_DIR/photos_$TIMESTAMP.tar.gz" /mnt/photos-storage/
# Delete backups older than 30 days
find "$BACKUP_DIR" -name "db_*.sql.gz" -mtime +30 -delete
echo "Immich backup completed at $TIMESTAMP"
Make it executable and add to crontab:
chmod +x /usr/local/bin/immich-backup.sh
# Run daily at 2 AM
(crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/immich-backup.sh") | crontab -
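A backup you've never restored is a hope, not a backup. Here's a hedged sketch of a restore helper (the dump filename is a placeholder; stop the app first with docker-compose stop immich-server so nothing writes mid-restore):

```shell
# Pipe a compressed pg_dump back into the running postgres container
restore_db() {
  # usage: restore_db /backups/immich/db_YYYYMMDD_HHMMSS.sql.gz
  gunzip -c "$1" | docker exec -i immich-postgres psql -U immich -d immich
}
```

Run a test restore against a scratch database every few months so you know the dumps are usable.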
Transfer backups off-site using