Persistent Storage Strategies for Docker Containers: Volumes, Bind Mounts, and Backups

One of the most common ways self-hosters lose data is by treating Docker containers as if they were permanent. They're not — when a container is removed, everything inside it vanishes unless you've set up persistent storage correctly. I've made this mistake myself early on with a Vaultwarden instance, and recovering from it was not fun. This tutorial walks through Docker volumes, bind mounts, when to use each, and how to back them up reliably so you never lose important data again.

Understanding Docker's Storage Model

Docker containers have a writable layer on top of their image, but this layer is ephemeral — it goes away with the container. For anything you care about (databases, config files, uploaded media, app state), you need to mount storage that lives outside the container's lifecycle.
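
You can see this for yourself with a throwaway demonstration:

# Write a file into a container's writable layer, then remove it
docker run --name scratch alpine sh -c 'echo "important" > /data.txt'
docker rm scratch

# A fresh container from the same image has no trace of the file
docker run --rm alpine cat /data.txt
# cat: can't open '/data.txt': No such file or directory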

Docker gives you three main approaches:

- Named volumes, which Docker creates and manages for you
- Bind mounts, which map a host directory you control into the container
- tmpfs mounts, which live in memory and vanish when the container stops (useful for scratch data, but by definition not persistent, so I won't cover them further)

My general rule: use named volumes for databases, use bind mounts for configs and user-facing files. It keeps things predictable and makes backups straightforward.
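
For reference, here's the same trio expressed with docker run's --mount flag (throwaway image and paths, just to show the syntax; note that unlike -v, --mount won't auto-create a missing bind path):

# Named volume: Docker manages the location on disk
docker run --rm --mount type=volume,src=app_data,dst=/data alpine ls /data

# Bind mount: explicit host path, which must already exist
docker run --rm --mount type=bind,src=/opt/docker/app,dst=/data alpine ls /data

# tmpfs: in-memory only, not persistent
docker run --rm --mount type=tmpfs,dst=/cache alpine ls /cache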

Named Volumes: The Docker-Native Approach

Named volumes are created and managed by Docker. You reference them by name and Docker handles where they live on disk. Here's how they look in a Docker Compose file for a typical Nextcloud + MariaDB stack:

services:
  db:
    image: mariadb:11
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: ncuser
      MYSQL_PASSWORD: changeme
    volumes:
      - db_data:/var/lib/mysql

  nextcloud:
    image: nextcloud:29
    restart: unless-stopped
    ports:
      - "8080:80"
    depends_on:
      - db
    volumes:
      - nextcloud_data:/var/www/html

volumes:
  db_data:
  nextcloud_data:

The volumes: block at the bottom tells Docker Compose to create these as named volumes if they don't already exist. When you run docker compose down, the volumes survive. Only docker compose down -v will delete them — which is a command I've learned to treat with great respect.
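
You can verify this behaviour yourself with the stack above:

# Stop and remove the containers; named volumes are left alone
docker compose down

# The volumes are still listed
docker volume ls

# Recreate the containers; they reattach to the existing volumes
docker compose up -d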

You can inspect a named volume to find its actual location on disk. Note that Compose prefixes volume names with the project name, so in these commands the volume may actually appear as something like myproject_nextcloud_data:

# List all volumes
docker volume ls

# Inspect a specific volume to find its mountpoint
docker volume inspect nextcloud_data

# Output includes something like:
# "Mountpoint": "/var/lib/docker/volumes/nextcloud_data/_data"

Bind Mounts: Direct Host Path Control

Bind mounts map a directory on your host directly into the container. I prefer bind mounts for anything I need to edit by hand — like a Caddy config, an AdGuard Home config file, or a Jellyfin media library that's shared across multiple containers.

Here's an example with Caddy as a reverse proxy, where I want full control over the config and certificates:

services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data
      - ./caddy/config:/config

  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - ./jellyfin/config:/config
      - /mnt/media/movies:/media/movies:ro
      - /mnt/media/shows:/media/shows:ro

The :ro suffix makes the mount read-only inside the container, which I always apply to media libraries. Jellyfin doesn't need to write to your movie collection, and limiting write access is a simple hardening step.
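
A quick way to confirm the mount really is read-only from inside the container:

# Attempting a write through a :ro mount fails
docker compose exec jellyfin touch /media/movies/test.txt
# touch: /media/movies/test.txt: Read-only file system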

Watch out: Bind mount paths must exist on the host before Docker starts the container, or Docker will create them as root-owned directories, which can cause permission errors — especially with apps like Nextcloud that expect specific UID/GID ownership. Always create the directories manually first: mkdir -p ./jellyfin/config.
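
A minimal pre-flight sketch for the Caddy + Jellyfin stack above (the 1000:1000 ownership is an assumption; check what UID/GID your images actually run as):

# Create every bind mount path up front, as the right owner
mkdir -p ./caddy/data ./caddy/config ./jellyfin/config

# A file mount must exist as a file, or Docker creates a directory in its place
touch ./caddy/Caddyfile

# Assumption: adjust the UID/GID to whatever your image runs as
sudo chown -R 1000:1000 ./jellyfin/config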

Choosing Between Volumes and Bind Mounts

Here's how I think about it in practice:

- Named volumes: databases and anything the application manages entirely on its own. Docker handles placement and permissions, and the data survives everything short of docker compose down -v.
- Bind mounts: configs you edit by hand, media libraries shared between containers, and anything you want visible as ordinary files on the host.
- When in doubt, either works; being consistent about the convention matters more than the choice itself.

One practical advantage of bind mounts is that they're trivially easy to include in your backup scripts since you know exactly where they are. Named volumes require an extra step to locate or extract data.
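
If you do need the host path behind a named volume, you can pull just the mountpoint out of docker volume inspect:

# Print only the host path for a named volume
docker volume inspect --format '{{ .Mountpoint }}' nextcloud_data
# /var/lib/docker/volumes/nextcloud_data/_data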

Backing Up Named Volumes

With the default local driver, named volumes live under /var/lib/docker/volumes/, and you can back them up while the container is running using a temporary busybox container. This is my go-to approach when stopping the container isn't ideal, though for databases specifically see the tip after the restore example below:

#!/bin/bash
# backup-volumes.sh — Run this via cron, e.g. daily at 2am
# cron: 0 2 * * * /opt/scripts/backup-volumes.sh

BACKUP_DIR="/opt/backups/docker-volumes"
DATE=$(date +%Y-%m-%d)
mkdir -p "$BACKUP_DIR"

# Backup a named volume to a compressed tarball
backup_volume() {
  local VOLUME_NAME="$1"
  local OUTPUT="$BACKUP_DIR/${VOLUME_NAME}-${DATE}.tar.gz"

  echo "Backing up volume: $VOLUME_NAME"
  docker run --rm \
    -v "${VOLUME_NAME}:/data:ro" \
    -v "${BACKUP_DIR}:/backup" \
    busybox \
    tar czf "/backup/${VOLUME_NAME}-${DATE}.tar.gz" -C /data .

  echo "Saved to: $OUTPUT"
}

backup_volume "nextcloud_data"
backup_volume "db_data"
backup_volume "vaultwarden_data"

# Keep only the last 7 days of backups
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +7 -delete

echo "Backup complete: $(date)"

To restore a named volume from one of these backups:

# Create the volume if it doesn't exist
docker volume create nextcloud_data

# Restore from backup tarball
docker run --rm \
  -v nextcloud_data:/data \
  -v /opt/backups/docker-volumes:/backup:ro \
  busybox \
  tar xzf /backup/nextcloud_data-2026-05-12.tar.gz -C /data

Tip: For databases specifically, prefer a proper database dump over a raw volume backup, since raw volume backups of a running database can produce inconsistent data if a write happens mid-copy. For MariaDB/MySQL, a dump like docker compose exec db sh -c 'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" nextcloud' > /opt/backups/nextcloud-db-$(date +%Y-%m-%d).sql runs non-interactively, because the root password is already present in the container's environment (a bare -p would prompt for input and hang a cron job).
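
To restore such a dump into the running database container (the -T flag disables the pseudo-TTY so the shell pipe works; service and database names match the compose file above):

# Recreate the nextcloud database from a SQL dump
docker compose exec -T db sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" nextcloud' \
  < /opt/backups/nextcloud-db-2026-05-12.sql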

Backing Up Bind Mount Directories

Bind mounts are easier to back up since they're just regular host directories. I use rsync for local backups and rclone for off-site copies to object storage. Here's a simple rsync-based backup script I use on my homelab server:

#!/bin/bash
# backup-bindmounts.sh

SOURCE_DIRS=(
  "/opt/docker/caddy"
  "/opt/docker/jellyfin/config"
  "/opt/docker/vaultwarden"
  "/opt/docker/immich"
)

BACKUP_ROOT="/opt/backups/bind-mounts"
DATE=$(date +%Y-%m-%d)

for DIR in "${SOURCE_DIRS[@]}"; do
  DIRNAME=$(basename "$DIR")
  DEST="$BACKUP_ROOT/$DIRNAME/$DATE"
  mkdir -p "$DEST"

  echo "Syncing $DIR -> $DEST"
  rsync -a --delete "$DIR/" "$DEST/"
done

# Optional: sync to remote storage with rclone
# rclone sync /opt/backups/bind-mounts remote:my-bucket/docker-backups

# Prune old backups older than 14 days
find "$BACKUP_ROOT" -mindepth 2 -maxdepth 2 -type d -mtime +14 \
  -exec rm -rf {} +

echo "Bind mount backup complete."

I run this at 3am via cron on every machine I self-host on. The rclone line is commented out but I use it on my VPS to push backups to Backblaze B2. If you're running on a DigitalOcean Droplet, their Spaces object storage integrates cleanly with rclone as an S3-compatible target — it's what I use for off-site copies of my Nextcloud and Immich data.
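
If you go the rclone route, the sync is a one-liner once the remote is configured (the remote and bucket names are placeholders; set yours up first with rclone config):

# Push local backups to S3-compatible object storage
rclone sync /opt/backups/bind-mounts remote:my-bucket/docker-backups --transfers 8 -P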

Docker Compose and Volume Portability

One thing I love about keeping all my bind mounts under a single directory like /opt/docker/ with one subdirectory per app is that migrating to a new server becomes a straightforward operation: stop all containers, rsync the entire /opt/docker/ tree to the new host, copy the compose files, and bring everything back up. No hunting around /var/lib/docker/volumes/ for data.
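
The whole migration fits in a few commands (hostnames and app directories are placeholders for illustration):

# On the old host, in each stack directory: stop cleanly
docker compose down

# Copy data and compose files together in one pass
rsync -az /opt/docker/ newhost:/opt/docker/

# On the new host: bring each stack back up from its directory
ssh newhost 'cd /opt/docker/nextcloud && docker compose up -d'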

If you prefer named volumes but still want easy portability, the docker volume create --opt flags let you pin a named volume to a specific host path using the local driver:

# Create a named volume pinned to a specific host directory
docker volume create \
  --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/opt/docker/myapp/data \
  myapp_data

This gives you the named volume syntax in your compose files while keeping data in a predictable, accessible location. It's the best of both worlds and what I increasingly use for new deployments.
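
The same pinned volume can be declared directly in a compose file rather than created by hand, using driver_opts (the host path must already exist):

volumes:
  myapp_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /opt/docker/myapp/data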

Conclusion

Getting persistent storage right in Docker is non-negotiable for anything you care about. My practical summary: use named volumes for databases, bind mounts for configs and media, always run automated backup scripts (and test restoring from them at least once), and keep your bind mounts in a single organised directory structure. If you're planning to run your stack on a cloud VPS, DigitalOcean Droplets are a solid choice — predictable pricing and easy snapshot functionality give you an additional layer of whole-disk backup on top of your container-level strategy.

Next steps: once your storage is sorted, look into automating your backup verification with a simple script that checks backup file age and size and sends you an alert if something looks off. A backup you haven't tested is just a hope, not a plan.
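
Here's a minimal sketch of such a check, with the path, thresholds, and alerting all assumptions to adapt; if you run it from cron with MAILTO set, cron will email you anything printed to stderr:

#!/bin/bash
# check-backups.sh -- alert if the newest backup looks stale or truncated
BACKUP_DIR="/opt/backups"
MAX_AGE_HOURS=26        # newest backup should be less than ~a day old
MIN_SIZE_BYTES=1024     # anything smaller is probably a failed run

# Find the most recently modified backup archive (GNU find)
LATEST=$(find "$BACKUP_DIR" -type f -name '*.tar.gz' -printf '%T@ %p\n' \
  | sort -n | tail -n 1 | cut -d' ' -f2-)

if [ -z "$LATEST" ]; then
  echo "ALERT: no backups found under $BACKUP_DIR" >&2
  exit 1
fi

AGE_HOURS=$(( ($(date +%s) - $(stat -c %Y "$LATEST")) / 3600 ))
SIZE_BYTES=$(stat -c %s "$LATEST")

if [ "$AGE_HOURS" -gt "$MAX_AGE_HOURS" ] || [ "$SIZE_BYTES" -lt "$MIN_SIZE_BYTES" ]; then
  echo "ALERT: $LATEST is ${AGE_HOURS}h old and ${SIZE_BYTES} bytes" >&2
  exit 1
fi

echo "OK: $LATEST (${AGE_HOURS}h old, ${SIZE_BYTES} bytes)"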
