How to Back Up and Restore a Kubernetes Cluster Manually

Kubernetes is a powerful tool for container orchestration, but like any complex system, it is susceptible to failures and data loss. To protect your Kubernetes deployment from such disasters, it’s essential to have a robust backup and restore strategy in place.

In this article, we’ll discuss how to back up and restore a Kubernetes cluster manually. While there are many automated tools available to make this process easier, understanding the manual process can be helpful in troubleshooting and gaining a deeper understanding of how Kubernetes works.


Backing up a Kubernetes cluster

There are two manual ways to back up all Kubernetes objects:

– Backing up Resources Configuration

– Backing up ETCD Cluster

Resources Configuration:

– We would have created objects such as pods, deployments, and services using either the imperative or the declarative method.

– If you used the imperative method to create objects, there are no definition YAML files to fall back on. Even with the declarative approach, the definition files used while creating the objects may have become stale.

To back up, we can query the kube-apiserver with the command below:

kubectl get all --all-namespaces -o yaml > all-pod-deploy-svc.yaml

The above command captures the definitions of pods, deployments, and services, which can later be restored with the "kubectl create" command. Note that "kubectl get all" does not cover every resource type; to back up all objects in the cluster (ConfigMaps, Secrets, custom resources, and so on), consider a tool such as Velero.
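If you prefer one file per namespace rather than a single large dump, the same idea can be sketched as a small loop. This is a hypothetical sketch, not an exhaustive backup: it assumes kubectl is configured against the target cluster, and the resource list is illustrative.

```shell
#!/bin/sh
# Sketch: back up a few core resource types per namespace to separate files.
# Assumes kubectl is configured; the resource list is illustrative only
# (CRDs, RBAC objects, etc. are not included).
BACKUP_DIR="/tmp/k8s-backup"
mkdir -p "$BACKUP_DIR"
if command -v kubectl >/dev/null 2>&1; then
  for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
    kubectl get deploy,svc,pod,configmap -n "$ns" -o yaml \
      > "${BACKUP_DIR}/${ns}.yaml"
  done
fi
# Restore later with: kubectl apply -f /tmp/k8s-backup/<namespace>.yaml
```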

ETCD Cluster:

If you want to back up the complete cluster state (that is, all objects across all resource groups), we can back up ETCD. All Kubernetes objects are stored in ETCD, so it is essential to have a backup plan for it.

The command for taking a backup:

ETCDCTL_API=3 etcdctl --endpoints=<endpoint> \
  --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
  snapshot save /tmp/snapshot.db

You will see the backup file in the /tmp directory:

ls /tmp
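Before relying on a snapshot, it is worth verifying it. A hedged sketch: etcdctl can report the snapshot's hash, revision, and key count (the check below is skipped if etcdctl is not installed or the file is missing).

```shell
#!/bin/sh
# Optional sanity check: inspect the snapshot's hash, revision, and key count.
SNAPSHOT=/tmp/snapshot.db
if command -v etcdctl >/dev/null 2>&1 && [ -f "$SNAPSHOT" ]; then
  ETCDCTL_API=3 etcdctl snapshot status "$SNAPSHOT" --write-out=table
fi
```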

Restoring the backup

The restore process for a Kubernetes cluster is similar to the backup process but in reverse. Here’s a general outline:

service kube-apiserver stop

ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir /var/lib/etcd-from-backup 

Now, we need to point the etcd.service file at the new data directory, then reload systemd, restart etcd, and start kube-apiserver:
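For reference, a minimal excerpt of what the updated unit file might look like. The binary path is an assumption for illustration, and the remaining ExecStart flags are cluster-specific, so they are omitted here:

```ini
# /etc/systemd/system/etcd.service (excerpt; other flags omitted)
[Service]
ExecStart=/usr/local/bin/etcd \
  --data-dir=/var/lib/etcd-from-backup \
  ...
```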

systemctl daemon-reload
service etcd restart
service kube-apiserver start

Your cluster will be back to its original state.


We can try the quick scenario below for testing:

– Create a pod "test-pod"

– Take an ETCD backup

– Delete the pod "test-pod"

– Restore the backup

Now you should see the previous state of the cluster, where test-pod is still running.
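The steps above can be sketched as a short script. This is a hypothetical walkthrough: the pod name and image are illustrative, the restore step is the manual procedure described earlier (not a single command), and the kubectl calls are guarded so the sketch only runs where a cluster is actually reachable.

```shell
#!/bin/sh
# Hypothetical walkthrough of the test scenario; pod name/image are examples.
POD_NAME="test-pod"
if command -v kubectl >/dev/null 2>&1; then
  kubectl run "$POD_NAME" --image=nginx   # 1. create the pod
  # 2. take the ETCD snapshot (backup command from the previous section)
  kubectl delete pod "$POD_NAME"          # 3. delete the pod
  # 4. restore the snapshot, restart etcd, and start kube-apiserver
  kubectl get pod "$POD_NAME"             # 5. the pod should be running again
fi
```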


In this article, we discussed how to back up and restore a Kubernetes cluster manually. While the manual process can be time-consuming and error-prone, it’s essential to have a backup strategy in place to protect against data loss and system failures. There are many automated backup and restore tools available for Kubernetes, such as Velero and Stash, which can simplify this process. However, understanding the manual process is valuable for troubleshooting and gaining a deeper understanding of how Kubernetes works.

Good luck with your learning!

Related Topics:

Run a command inside a Kubernetes Container/Pod

How to View Kubernetes Pod Logs (With Docker logging Examples)

Create and Edit a pod in Kubernetes
