Managing pods during node failure using a replication controller
In this blog, we will show you how Kubernetes manages pods during a node failure using a replication controller.
- 2-node cluster (1 master VM with 2 worker nodes)
- Kubernetes Components
- We have already installed and configured the 2-node cluster in our demo environment.
- See https://blog.assistanz.com/cloud-computing/steps-to-install-kubernetes-cluster-manually-using-centos-7/ for the installation steps.
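Assuming the cluster from the linked guide is up, its health can be checked from the master VM with standard kubectl commands (node names will differ in your environment):

```shell
# List cluster nodes; both workers should report Ready
kubectl get nodes

# Check control-plane component health
kubectl get componentstatuses
```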
REPLICATION CONTROLLER OVERVIEW
- We have already created a replication controller in our environment.
- Currently, this replication controller manages four pods, which the scheduler has distributed across both nodes.
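The replication controller was created earlier in our environment; a minimal manifest along the following lines would produce four replicas. The controller name `rep-pod` matches the pod names seen later, but the `env: dev` selector label and the nginx container are assumptions for illustration, not the exact values from our setup:

```shell
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: rep-pod          # matches the rep-pod-* pod names in this demo
spec:
  replicas: 4
  selector:
    env: dev             # assumed selector label
  template:
    metadata:
      labels:
        env: dev         # pod template label must match the selector
    spec:
      containers:
      - name: nginx      # assumed container; any image works for the demo
        image: nginx
EOF
```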
- To simulate a node failure, let's disable node 2's network interface.
- After a few seconds, you can verify the node status from the Kubernetes master VM.
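The status transition can be followed live from the master:

```shell
# Watch node status; the disconnected node moves from Ready to NotReady
kubectl get nodes --watch
```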
- Kubernetes waits for a grace period before rescheduling the pods to the available nodes. If the node is still unreachable after that period (five minutes by default), the status of its pods changes to Unknown.
- At this point, the replication controller creates replacement pods on the available node.
- The pods in Unknown status are marked for deletion.
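The rescheduling can be observed with pod-level detail; the eviction delay mentioned above is set by the controller manager's `--pod-eviction-timeout` flag (default 5m0s):

```shell
# Show pods with their node placement; pods from the failed node
# report Unknown while their replacements run on the healthy node
kubectl get pods -o wide
```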
- Once node 2 is back online, you can see its status change back to Ready.
MOVING THE POD OUT OF REPLICATION CONTROLLER
- A replication controller manages its pods through a label selector. We can view pod labels using the --show-labels option.
kubectl get pods --show-labels
- Let's modify the label of a pod using the command below.
Syntax: kubectl label <object-type> <object-name> <key>=<value> --overwrite
Example: kubectl label pod rep-pod-vl9mm env=prod --overwrite
- The command executes successfully.
- The old pod no longer matches the replication controller's label selector, so Kubernetes starts creating a new pod to restore the desired replica count.
- After a few seconds, the new pod (rep-pod-b7kl2) is in Running status. The old pod (rep-pod-vl9mm) remains as an unmanaged pod.
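To confirm, list the pods with their labels: the relabeled pod now carries env=prod, while the replacement carries the selector's label. Since the relabeled pod is no longer managed by the controller, deleting it removes it permanently and no replacement is created:

```shell
# The relabeled pod shows env=prod; the new pod matches the selector
kubectl get pods --show-labels

# Clean up the unmanaged pod; the replication controller will not recreate it
kubectl delete pod rep-pod-vl9mm
```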
Thanks for reading this blog. We hope it helped you learn how Kubernetes manages pods during node failure using a replication controller.