Running Stateful Workloads on EKS Auto Mode

EKS Auto Mode simplifies cluster operations by handling node provisioning, scaling, and infrastructure management automatically. But what about persistent storage? One of my first questions when testing Auto Mode was whether it would handle stateful workloads as seamlessly as it handles compute.

The good news is that storage is configured out of the box. EKS Auto Mode comes with the EBS CSI driver pre-installed, so you can start running stateful applications immediately without additional setup. This post walks through how to configure persistent storage on an EKS Auto Mode cluster and deploy a stateful application.

Storage Drivers in Auto Mode

Storage is pre-configured in EKS Auto Mode. When you run kubectl get csidrivers, you should see the EBS CSI driver already available:

(Screenshot: kubectl get csidrivers output showing the EBS CSI driver in an EKS Auto Mode cluster)
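For reference, the check looks roughly like this; the exact columns depend on your kubectl version, and the values shown here are illustrative rather than copied from a live cluster:

```shell
# List installed CSI drivers; in Auto Mode the EBS driver is present by default
kubectl get csidrivers
# NAME                        ...   MODES        AGE
# ebs.csi.eks.amazonaws.com   ...   Persistent   ...
```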

This is a significant improvement over standard EKS clusters, where you'd need to install and configure the EBS CSI driver yourself, including IAM roles and service accounts.

Deploying a Stateful Application

Let's walk through deploying a stateful application that uses persistent storage. The process involves three steps: creating a StorageClass, claiming a persistent volume, and deploying a pod that uses it.

Step 1: Create a StorageClass

The StorageClass defines how persistent volumes should be provisioned. Here's a configuration that works well for most use cases:

yml
# fully default class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.eks.amazonaws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
parameters:
  type: gp3
  encrypted: "true"

Key Configuration Details:

  • provisioner: ebs.csi.eks.amazonaws.com - Uses the EBS CSI driver that comes with EKS Auto Mode
  • volumeBindingMode: WaitForFirstConsumer - Delays volume creation until a pod actually needs it. This ensures the volume is created in the same availability zone as the pod, avoiding cross-AZ mount failures
  • type: gp3 - Uses gp3 EBS volumes, which provide a 3,000 IOPS and 125 MiB/s baseline regardless of volume size, at a lower per-GB price than gp2
  • encrypted: "true" - Encrypts volumes using the default aws/ebs KMS key. This is optional but recommended for most workloads
  • storageclass.kubernetes.io/is-default-class: "true" - Makes this the default storage class. Be careful with this if you're migrating from another storage controller

The WaitForFirstConsumer binding mode is particularly important. I've seen teams run into issues where volumes were created in different availability zones than their pods, making them unmountable. This setting prevents that problem entirely.
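Assuming the manifest above is saved as storageclass.yaml (the filename is arbitrary), applying and inspecting it looks like:

```shell
# Create the StorageClass and confirm it is registered as the default
kubectl apply -f storageclass.yaml
kubectl get storageclass ebs-sc
# The (default) marker should appear next to the name, e.g.:
# NAME               PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ...
# ebs-sc (default)   ebs.csi.eks.amazonaws.com   Delete          WaitForFirstConsumer   ...
```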

Step 2: Create a PersistentVolumeClaim

Once the StorageClass is in place, you can claim storage for your application:

yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi

This PersistentVolumeClaim requests a 4Gi volume using the ebs-sc storage class we created earlier. The ReadWriteOnce access mode means the volume can be mounted read-write by a single node at a time, which is appropriate for most stateful applications like databases.

The actual EBS volume won't be created yet. Because we set volumeBindingMode: WaitForFirstConsumer, the volume will only be provisioned when a pod claims it.
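You can observe this delayed binding directly. Assuming the claim manifest is saved as pvc.yaml, the PVC sits in Pending until a pod references it:

```shell
kubectl apply -f pvc.yaml
kubectl get pvc ebs-claim
# NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
# ebs-claim   Pending                                      ebs-sc         5s

# The events explain why: the claim is waiting for its first consumer
kubectl describe pvc ebs-claim
# ... waiting for first consumer to be created before binding
```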

Step 3: Deploy a Pod with Persistent Storage

Now we can deploy a pod that uses the persistent volume:

yml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 3600; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim

This example pod uses the nginx image but overrides its default command, so instead of serving traffic it runs a shell loop that appends a timestamp to /data/out.txt every hour. The /data directory is backed by the persistent volume, so the data survives pod restarts.

When this pod is created, Kubernetes triggers the EBS CSI driver to provision a 4Gi encrypted gp3 volume in the same availability zone as the pod. The volume is automatically attached to the node and mounted into the pod.
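If you want to confirm the volume on the AWS side, the EBS CSI driver tags provisioned volumes with the PVC that backs them. A filter like the one below should find it; the exact tag key is an assumption worth verifying in your account, since the Auto Mode driver variant may tag slightly differently:

```shell
# Look up the provisioned EBS volume by the tag the CSI driver applies
aws ec2 describe-volumes \
  --filters "Name=tag:kubernetes.io/created-for/pvc/name,Values=ebs-claim" \
  --query "Volumes[].{Id:VolumeId,Type:VolumeType,Size:Size,Encrypted:Encrypted,AZ:AvailabilityZone}"
```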

Verifying the Setup

After deploying the pod, you can verify everything is working:

bash
# Check the pod is running
kubectl get pod app

# Check the PVC is bound
kubectl get pvc ebs-claim

# Check the persistent volume was created
kubectl get pv

# Verify data is being written
kubectl exec app -- cat /data/out.txt

You should see timestamped entries in the output file, confirming that the application is writing to persistent storage.
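A quick way to prove the data actually persists is to delete the pod and recreate it (assuming the pod manifest is saved as pod.yaml). Entries written before the deletion should still be in the file, because /data lives on the EBS volume rather than the pod's ephemeral filesystem:

```shell
kubectl delete pod app
kubectl apply -f pod.yaml
kubectl wait --for=condition=Ready pod/app --timeout=120s

# Timestamps from before the deletion are still present
kubectl exec app -- cat /data/out.txt
```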

Additional Resources