Draining & Uncordoning in a Kubernetes cluster

Gaurav Kaushik
4 min read · Apr 16, 2021


In Kubernetes, it is sometimes necessary to take a node out of service, for example during a maintenance activity. To do this, you can drain the node: the pods running on the node to be drained are evicted gracefully (and, where they are managed by a controller, rescheduled on other nodes), and the node is marked unschedulable.
Draining can be performed on a Control Plane node as well as on a Worker node.

Command to drain a Kubernetes node:
kubectl drain <node-name> --ignore-daemonsets

‘--ignore-daemonsets’ is needed because DaemonSet pods are tied to the node and cannot be evicted; the flag tells kubectl to proceed with the drain and simply leave them in place.
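Before draining, it can be useful to see exactly what is running on the node, including any DaemonSet pods that will stay behind. A quick check, with the node name as a placeholder:

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>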

Uncordoning: After the draining process is completed and all the maintenance tasks are done, you will usually want the node to run pods once again. That’s where uncordoning comes into play: uncordoning marks the node as schedulable again, so new pods can be placed on it.
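The command mirrors the drain command; the node name is again a placeholder:

kubectl uncordon <node-name>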

Now, in this post I will demonstrate draining and uncordoning by upgrading kubeadm, kubectl and kubelet on the Control Plane node and the Worker nodes in our Kubernetes setup, one by one:

Control Plane Upgrade Steps:

a) Upgrade kubeadm on the Control Plane node
b) Drain the Control Plane node
c) Plan the upgrade (kubeadm upgrade plan)
d) Apply the upgrade (kubeadm upgrade apply)
e) Upgrade kubelet & kubectl on the Control Plane node
f) Uncordon the Control Plane node

Worker Node Upgrade Steps:

a) Drain the node
b) Upgrade kubeadm on the node
c) Upgrade the kubelet configuration (kubeadm upgrade node)
d) Upgrade kubelet & kubectl
e) Uncordon the node

Starting with the Control Plane node, the steps in detail are:
1) Upgrade kubeadm

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.2-00

The kubeadm version on the Control Plane node is now shown as ‘v1.20.2’, but the upgrade has not been applied to the cluster yet. The output of ‘kubectl get nodes’ clearly shows that the node version is still ‘v1.20.1’.
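The two checks mentioned above can be run directly:

kubeadm version
kubectl get nodes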

2) Drain the Control node

kubectl drain k8s-control --ignore-daemonsets
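Once the drain completes, the node is marked unschedulable; this can be confirmed before moving on:

kubectl get nodes

The STATUS column for k8s-control should now read ‘Ready,SchedulingDisabled’.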

3) Plan the upgrade

sudo kubeadm upgrade plan v1.20.2

4) Apply the upgrade

sudo kubeadm upgrade apply v1.20.2
The cluster control plane is upgraded to v1.20.2.

5) Upgrade kubelet & kubectl

sudo apt-get install -y --allow-change-held-packages kubelet=1.20.2-00 kubectl=1.20.2-00

6) Uncordon the node
NOTE: The upgrade may have changed the kubelet unit file, so run daemon-reload first so that systemd picks up the new unit file, then restart the kubelet before uncordoning.

sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon k8s-control

Finally, the upgraded version of the Control Plane node can be checked by executing ‘kubectl get nodes’:
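kubectl get nodes

The VERSION column for k8s-control should now read v1.20.2.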

Steps for the Worker node:
1) Drain the Worker node (from the Control Plane node console)

kubectl drain k8s-worker1 --ignore-daemonsets --force

‘--force’ lets the drain proceed even if there are pods on the node that are not managed by a controller; note that such pods are simply deleted and will not be rescheduled.

2) Upgrade kubeadm

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.2-00

The new version can be verified with ‘kubeadm version’.

3) Upgrade the kubelet configuration

sudo kubeadm upgrade node

4) Upgrade kubelet & kubectl

sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=1.20.2-00 kubectl=1.20.2-00

5) Restart kubelet on the Worker node

sudo systemctl daemon-reload
sudo systemctl restart kubelet

6) Uncordon the Worker node (from the control node console)

kubectl uncordon k8s-worker1
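As with the Control Plane node, the result can be verified from the control node console:

kubectl get nodes

The VERSION column for k8s-worker1 should now read v1.20.2 and its STATUS should be back to ‘Ready’.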

At this point, our Kubernetes setup has the Control Plane node and Worker node-1 in an upgraded and functional state.
Use the same set of steps (as for Worker node-1) on Worker node-2 to bring it to the upgraded state as well; a condensed sketch follows.
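Assuming the second worker is named k8s-worker2 (following the naming used above — adjust to your own node name), the sequence in condensed form is:

# From the control plane node:
kubectl drain k8s-worker2 --ignore-daemonsets --force

# On k8s-worker2:
sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.2-00
sudo kubeadm upgrade node
sudo apt-get install -y --allow-change-held-packages kubelet=1.20.2-00 kubectl=1.20.2-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# From the control plane node again:
kubectl uncordon k8s-worker2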

I hope this post helps you understand the basic concept of draining and uncordoning.
The next post will demonstrate how to back up etcd in a Kubernetes cluster.
