If you search for “backup Kubernetes” on Google you will probably find quite a lot of different solutions. Unfortunately, many of them do not explain the big picture. There are pages pointing out that you should do backups, some references to solutions like Velero, and descriptions of how to back up etcd. But it is hard to find anything putting the pieces together. When I first wrote this post I referred to an issue from 2016 about how to do Kubernetes backups and migrations. It was closed in the summer of 2020 without a fix, due to the complexity of the topic.

So why back up a Kubernetes cluster at all? This may seem like a silly question, but it is quite important to know what the backup is for before you decide on how to do it. Some may even wonder if backups are needed at all in Kubernetes. Isn’t it all about stateless applications that you can easily redeploy on any other cluster? Well, state is still quite useful or even necessary, and besides, avoiding downtime (due to migration) is still a thing for stateless applications.

There are essentially two reasons for backing up:

- To be able to restore a failed control plane Node.
- To be able to restore applications (with data).

As you may know, the workload will happily keep running even if the control plane goes down. That is, unless the workload needs to talk to the API, of course. But this isn’t very helpful unless you are able to restore the control plane later. In other words, you need to make backups in order to be able to restore a control plane Node, or be forced to migrate to a new cluster if one fails. And this is of course especially important if you run a cluster with just a single control plane Node.

The second point in the list is relevant for restoring/migrating the workload to a new cluster or restoring a single failed application. This requires backups of all the resources in the cluster, along with any state stored in persistent volumes. Note that there is a difference here in that these resources should be completely cluster agnostic. In the previous case the backup was heavily tied to a specific cluster, exactly because it was supposed to restore that same cluster. But here we are talking about only the workload, which should be able to run on any (similar) cluster. We will focus mostly on the first point in this post: backing up and restoring a control plane Node.

You may notice that we didn’t mention backing up worker Nodes. This is because workers should be interchangeable in Kubernetes: it should not matter what Node a Pod is running on. As long as there are sufficient resources left in the cluster, you should be able to take down or replace a worker without affecting the workload. Some Pods may have to be evicted and rescheduled of course, but if you build your applications correctly this should not be a problem. If you find yourself needing backups of worker Nodes (for example because you are using local storage on the Node), you should really consider changing the way you deploy your applications instead. Otherwise you are not really taking advantage of what Kubernetes has to offer.

The two reasons for backing up Kubernetes give us (at least) two different backup strategies: one for etcd and the relevant certificates in order to restore the control plane, and one for the applications running in the cluster. It’s time to take a look at how it can be done!

The documentation on etcd for Kubernetes is quite good on a general level. But as a consequence, etcd is treated like a separate component with few connections to the Kubernetes world. This makes it hard to apply the knowledge. It’s simply unclear what an etcd snapshot has to do with your applications running in the Kubernetes cluster. Furthermore, there is no information about what else you need to back up. So let’s take a look at what’s needed and how to do it.

As mentioned previously, we need to back up etcd. In addition to that, we need the certificates and, optionally, the kubeadm configuration file for easily restoring the master. If you set up your cluster using kubeadm (with no special configuration) you can do it similar to this (adjust the etcd image tag to match the etcd version your cluster runs):

```shell
# Backup certificates
sudo cp -r /etc/kubernetes/pki backup/

# Make etcd snapshot
sudo docker run --rm -v $(pwd)/backup:/backup \
    --network host \
    -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd \
    --env ETCDCTL_API=3 \
    k8s.gcr.io/etcd:3.4.3-0 \
    etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    snapshot save /backup/etcd-snapshot-latest.db

# Backup kubeadm-config
sudo cp /etc/kubeadm/kubeadm-config.yaml backup/
```

So what is really going on here? There are three commands in the example, and all of them should be run on the control plane Node. The first one copies the folder containing all the certificates that kubeadm creates. The second runs etcdctl in a Docker container to save a snapshot of etcd into the backup folder, authenticating with the etcd client certificates. The third copies the kubeadm configuration file. Note that the contents of the backup folder should then be stored somewhere safe, where it can survive if the control plane is completely destroyed.
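The example saves the snapshot as `etcd-snapshot-latest.db`, which gets overwritten on every run. When copying the backup folder somewhere safe, you will usually want timestamped copies as well, so that one bad snapshot cannot clobber your only backup. A minimal sketch of that step (the naming scheme here is my own, not from the post):

```shell
# Hypothetical retention scheme: keep "latest" plus a timestamped copy.
backup_dir="backup"
latest="$backup_dir/etcd-snapshot-latest.db"
stamped="$backup_dir/etcd-snapshot-$(date +%Y-%m-%d_%H-%M-%S).db"

mkdir -p "$backup_dir"
# In a real run, $latest is produced by the etcdctl snapshot command;
# here we only demonstrate the copy-and-keep step.
[ -f "$latest" ] && cp "$latest" "$stamped"
echo "$stamped"  # prints the timestamped file name
```

Running this from cron after each snapshot gives you a simple history of backups to prune as you see fit.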
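The backup only pays off if you can restore from it. As a rough sketch of the counterpart (assuming the same kubeadm defaults and paths as in the backup example above, and an etcd image tag matching your cluster; this is my outline, not a verbatim procedure from the post), restoring a destroyed control plane Node would involve putting the certificates back, restoring the etcd data directory from the snapshot, and re-initializing the control plane:

```shell
# Hypothetical restore sketch -- paths assume kubeadm defaults.
# 1. Put the certificates back.
sudo cp -r backup/pki /etc/kubernetes/

# 2. Restore the etcd snapshot into the data directory.
#    Assumes /var/lib/etcd is empty (move any old data directory away first).
sudo docker run --rm \
    -v $(pwd)/backup:/backup \
    -v /var/lib/etcd:/var/lib/etcd \
    --env ETCDCTL_API=3 \
    k8s.gcr.io/etcd:3.4.3-0 \
    etcdctl snapshot restore /backup/etcd-snapshot-latest.db \
    --data-dir /var/lib/etcd

# 3. Re-initialize the control plane, telling kubeadm to accept the
#    pre-populated etcd data directory instead of refusing to start.
sudo kubeadm init \
    --ignore-preflight-errors=DirAvailable--var-lib-etcd \
    --config backup/kubeadm-config.yaml
```

Afterwards the Node should come up with the cluster state as of the snapshot; workloads scheduled since the backup was taken will of course be missing.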