Scaling Your Homelab: When to Migrate from Docker Compose to Kubernetes

I started with Docker Compose about three years ago. It was simple, elegant, and solved everything I needed: spin up Nextcloud, Jellyfin, Home Assistant, a reverse proxy. A single docker-compose.yml file, and my entire homelab was orchestrated. Then things changed. My stack grew to 15+ services. I needed better resource allocation. I wanted automatic failover. I needed rolling updates without downtime. That's when I realized Docker Compose had hit its ceiling, and Kubernetes wasn't the obvious next step—but it became necessary.

This article is about knowing when that moment arrives for you, and how to actually make the jump without losing sleep.

Why Docker Compose Stops Working

Docker Compose is single-host orchestration. It's brilliant at that job. But the moment you:

  - need to spread services across more than one machine,
  - want hard resource limits with automatic eviction,
  - expect failed services to restart and reschedule on their own, or
  - need rolling updates without downtime,

…you're working against Docker Compose's design, not with it.

I kept trying to patch Docker Compose with extra scripts—a Portainer dashboard here, a cron job there. But when my Jellyfin container leaked memory and brought down my entire host, I realized I needed actual orchestration, not clever bash.
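One of those patches was capping memory per service in Compose itself. A minimal sketch of what that looked like (the limit value here is illustrative, not my exact setting):

```yaml
# docker-compose.yml (excerpt): a per-service memory cap as a stopgap.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    deploy:
      resources:
        limits:
          memory: 2g           # container is OOM-killed past this...
    restart: unless-stopped    # ...and restarted, but nothing reschedules or rebalances it
```

The cap protects the host, but there is still no scheduler deciding where the service should run next.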

The Real Threshold: When to Actually Migrate

Before you commit 40 hours to learning Kubernetes, ask yourself these questions:

1. Do You Have Multiple Machines?

If all your services run on a single server (even a beefy one), Docker Compose is defensible. Kubernetes shines when you have three or more nodes and want to distribute workloads. If you're running everything on a single machine, like a NUC or a mini PC, Kubernetes is overkill—stay with Compose.

2. Are You Hitting Resource Contention?

I realized my problem when I looked at docker stats: Nextcloud was consuming unbounded RAM and starving Jellyfin. Kubernetes lets you set hard resource requests and limits per pod and evicts pods when a node runs short of memory. Docker Compose can cap memory per container, but it has no scheduler: nothing reschedules or rebalances a misbehaving service.

3. Do You Need High Availability?

If your homelab going down for 10 minutes is acceptable, Docker Compose is fine. If you're running mission-critical services (like a Vaultwarden password vault or a Nextcloud instance with active users), Kubernetes's automatic restart and rescheduling logic is worth the complexity.
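That restart logic is driven by health checks you declare on the container. A minimal sketch of a liveness probe (the path, port, and timings are illustrative; use your service's real health endpoint):

```yaml
# Excerpt from a container spec: the kubelet restarts the container
# if this HTTP check fails three consecutive times.
livenessProbe:
  httpGet:
    path: /            # assumed health endpoint
    port: 80
  initialDelaySeconds: 30   # give the app time to start
  periodSeconds: 10
  failureThreshold: 3
```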

4. Are You Managing More Than 8–10 Services?

My 6-service setup was easy. At 15 services, Docker Compose became a coordination nightmare. Kubernetes enforces declarative state, which actually simplifies management at scale.
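Declarative state is easiest to see with Kustomize, which ships inside kubectl: one file lists every manifest, and kubectl apply -k . reconciles the whole stack in a single command. A sketch (the filenames are examples, matching the manifests shown later in this article plus a hypothetical Vaultwarden one):

```yaml
# kustomization.yaml: one declarative entry point for the whole homelab
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - caddy-configmap.yaml
  - caddy-deployment.yaml
  - caddy-service.yaml
  - vaultwarden-deployment.yaml   # hypothetical additional service
```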

If you answer "yes" to 2+ of these, migrate. If you answer "no" to all of them, save yourself the pain and stick with Compose.

The Migration Path: Do It Gradually

I didn't rip out Docker Compose overnight. I migrated incrementally—this matters because Kubernetes and Docker Compose can coexist during transition.

Step 1: Choose Your Kubernetes Flavor

For a homelab, there are three legitimate options:

  - k3s: Rancher's lightweight, single-binary distribution; runs well on Raspberry Pis and other small machines
  - MicroK8s: Canonical's snap-packaged distribution; convenient on Ubuntu hosts
  - k0s: another minimal single-binary distribution aimed at small clusters

I went with k3s because I was already comfortable with Rancher's ecosystem, and it runs happily on three Raspberry Pi 4s with 4GB of RAM each.

Tip: Install k3s with curl -sfL https://get.k3s.io | sh -. It writes a ready-to-use kubeconfig to /etc/rancher/k3s/k3s.yaml (copy that to ~/.kube/config, or point KUBECONFIG at it). Unlike full Kubernetes, you don't need to debug 15 different systemd services.

Step 2: Convert Your First Non-Critical Service

Don't start with Nextcloud. Start with something stateless: a reverse proxy, a monitoring tool, an API gateway. I migrated my Caddy reverse proxy first because if it broke, I could quickly fall back to the Docker Compose version.

Here's how I converted my Caddy Compose service to a Kubernetes deployment:

# Old Docker Compose (docker-compose.yml)
services:
  caddy:
    image: caddy:2.7-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data
    networks:
      - web

volumes:
  caddy-data:

Converted to Kubernetes manifests:

# Kubernetes ConfigMap (caddy-configmap.yaml)
apiVersion: v1
kind: ConfigMap
metadata:
  name: caddy-config
  namespace: default
data:
  Caddyfile: |
    # In Kubernetes, upstreams are Service DNS names, not localhost;
    # "my-app" is a placeholder Service in the same namespace.
    :80 {
      reverse_proxy my-app:3000
    }
    :443 {
      reverse_proxy my-app:3000
      tls internal
    }

---
# Kubernetes Deployment (caddy-deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caddy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caddy
  template:
    metadata:
      labels:
        app: caddy
    spec:
      containers:
      - name: caddy
        image: caddy:2.7-alpine
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
        volumeMounts:
        - name: config
          mountPath: /etc/caddy
          readOnly: true
        - name: data
          mountPath: /data
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
      volumes:
      - name: config
        configMap:
          name: caddy-config
      - name: data
        emptyDir: {}  # ephemeral: TLS certificates are lost on pod restart; use a PVC to persist /data

---
# Kubernetes Service (caddy-service.yaml)
apiVersion: v1
kind: Service
metadata:
  name: caddy
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: caddy
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443

Notice the differences:

  - The bind-mounted Caddyfile became a ConfigMap, so configuration lives in the cluster rather than on one host's disk.
  - The named volume became an emptyDir, which is fine for a first test but won't persist certificates across restarts.
  - Resource requests and limits are explicit, so Kubernetes can schedule the pod sensibly and cap its usage.
  - Host port mappings became a Service of type LoadBalancer, decoupling how the pod is reached from where it runs.

Deploy it with: kubectl apply -f caddy-configmap.yaml -f caddy-deployment.yaml -f caddy-service.yaml (or put the manifests in their own directory and run kubectl apply -f ./caddy/).

Step 3: Handle Persistent Data

Stateless services are easy. Stateful ones—Nextcloud, Vaultwarden, databases—require persistent volumes. This is where most homelab migrations fail.

k3s includes a local storage provisioner by default. Your PersistentVolumeClaim automatically creates a hostPath volume. For multi-node clusters, I use Longhorn (Rancher's distributed storage) or NFS exports from a NAS.

Example: Nextcloud with persistent storage in Kubernetes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 100Gi

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nextcloud
spec:
  serviceName: nextcloud
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:28-apache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nextcloud-storage
          mountPath: /var/www/html
        env:
        - name: NEXTCLOUD_ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud-secrets
              key: admin-user
        - name: NEXTCLOUD_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud-secrets
              key: admin-password
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "1000m"
  volumeClaimTemplates:
  - metadata:
      name: nextcloud-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-path
      resources:
        requests:
          storage: 100Gi

---
apiVersion: v1
kind: Secret
metadata:
  name: nextcloud-secrets
type: Opaque
stringData:
  admin-user: admin
  admin-password: changeme123

Watch out: Store secrets in a proper secret manager (Sealed Secrets, Vault). Never commit plaintext secrets to git. I use Sealed Secrets with k3s; it encrypts each Secret with the cluster's public key, so the sealed version is safe to commit, and only the in-cluster controller can decrypt it.
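Assuming the Sealed Secrets controller is installed, you pipe an ordinary Secret through kubeseal and commit only the output. The sealed manifest looks roughly like this (the ciphertext values are placeholders, not real output):

```yaml
# Produced by something like:
#   kubectl create secret generic nextcloud-secrets --dry-run=client -o yaml \
#     --from-literal=admin-user=admin | kubeseal -o yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: nextcloud-secrets
  namespace: default
spec:
  encryptedData:
    admin-user: AgBy3i...      # placeholder ciphertext
    admin-password: AgCtr8...  # placeholder ciphertext
```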

Step 4: Keep Docker Compose Running Alongside Kubernetes

During migration, my Docker Compose stack ran on Node A (a dedicated Pi), and my Kubernetes cluster ran on Nodes B, C, and D. They could reach each other over the LAN. I gradually moved services from Compose to Kubernetes over three months. This low-risk approach meant I could roll back any broken service in seconds.

The Learning Curve Is Real, But Worth It

Kubernetes has a reputation for being overwhelming. It is—at first. But I went from "confused" to "productive" in about four weeks by:

  - using kubectl against a real cluster every day instead of only reading documentation
  - migrating one simple service at a time and debugging whatever broke
  - learning only the objects I actually touched: Deployments, Services, ConfigMaps, PersistentVolumeClaims

I didn't need to understand etcd, RBAC, or CNI plugins. I learned those later because I was curious, not because they were essential.

When to Stay with Docker Compose

I want to be honest: if you have:

  - a single host running your whole stack
  - fewer than 8–10 services
  - tolerance for the occasional few minutes of downtime

Docker Compose + a good backup strategy + Watchtower for automatic updates is perfectly legitimate. I know homelabbers running stable, happy Compose stacks for years. The difference is intentionality: you've chosen simplicity and accept the tradeoff.
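For reference, a minimal Watchtower sketch in Compose (the schedule flag value is an assumption; check Watchtower's docs for your setup):

```yaml
# docker-compose.yml (excerpt): Watchtower polls registries for new image
# tags and recreates containers whose images have updated.
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # needs Docker API access
    command: --cleanup --interval 86400            # check daily, prune old images
    restart: unless-stopped
```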

Next Steps

If you're ready to migrate, start here:

  1. Install k3s on a test machine (a spare Pi, a second-hand laptop, even a VPS—RackNerd's KVM VPS is affordable for homelab experimentation)
  2. Deploy one stateless service (nginx, a monitoring agent, anything simple)
  3. Get comfortable with kubectl logs, kubectl describe pod, and kubectl exec
  4. Then move a stateful service, starting with something you can afford to break

The migration took me 8–10 weeks from "Kubernetes curious" to "running 12 services on k3s." Your timeline will vary, but don't rush it. Each service you migrate teaches you something about your infrastructure. That knowledge is worth more than speed.
