Managing The Pods During Node Failure Using Replication Controller

In this blog, we will show you how Kubernetes manages pods during a node failure using a replication controller.

REQUIREMENTS

  • A cluster with 1 master VM and 2 worker nodes
  • Kubernetes Components

INFRASTRUCTURE OVERVIEW

  • We have already installed and configured the 2-node cluster in our demo environment.
  • Please check the URL https://blog.assistanz.com/cloud-computing/steps-to-install-kubernetes-cluster-manually-using-centos-7/ for the installation steps.
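Before proceeding, it helps to confirm that the cluster described above is healthy. A minimal check, assuming kubectl is already configured on the master VM:

```shell
# Verify that the master and both worker nodes are in Ready status
kubectl get nodes

# Optionally confirm the core control-plane components are healthy
kubectl get componentstatuses
```

If any node is not Ready, fix that before simulating a failure, so the rescheduling behavior below is easy to observe.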

REPLICATION CONTROLLER OVERVIEW

  • We have already created a replication controller in our environment.
  • Currently, there are four pods in this replication controller. The scheduler has distributed them across the nodes.
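The state described above can be inspected with the commands below (the replication controller and pod names are from this demo environment and will differ in your cluster):

```shell
# Show the replication controller and its desired vs. current pod counts
kubectl get rc

# Show each pod along with the node it was scheduled on
kubectl get pods -o wide
```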

NODE FAILURE

  • To simulate a node failure, let's disable the network interface on node 2.
  • After a few seconds, you can verify the node status from the Kubernetes master VM; the node will be reported as NotReady.
  • Kubernetes waits for a while before rescheduling the pods to the available nodes. If the node is still not reachable after several minutes, the pod status changes to Unknown.
  • At this point, the replication controller creates new pods on the available node.
  • The pods with Unknown status are marked for deletion.
  • Once node 2 is back online, its status changes back to Ready.
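The failure simulation above can be sketched with the commands below. The interface name eth0 is an assumption; check the actual name on node 2 with "ip link" first:

```shell
# On node 2: bring the network interface down to simulate a node failure
# (eth0 is an assumed interface name; substitute your own)
sudo ip link set eth0 down

# On the master: watch the node transition from Ready to NotReady
kubectl get nodes -w

# After the eviction timeout, the affected pods report Unknown status and
# the replication controller starts replacement pods on the healthy node
kubectl get pods -o wide
```

Bringing the interface back up with "sudo ip link set eth0 up" returns the node to Ready, as described above.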

MOVING THE POD OUT OF REPLICATION CONTROLLER

  • A replication controller manages pods through its label selector. We can view the pod labels using the --show-labels option.

kubectl get pods --show-labels

  • Let's modify the label of a pod using the command below.

Syntax: kubectl label <object type> <object name> <key>=<value> --overwrite

Example: kubectl label pod rep-pod-vl9mm env=prod --overwrite

  • The command executed successfully.
  • The old pod no longer matches the replication controller's label selector, so Kubernetes starts creating a new pod to restore the desired state.
  • After a few seconds, the new pod (rep-pod-b7kl2) is in Running status. The old pod (rep-pod-vl9mm) remains as an unmanaged pod.
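You can verify the result described above, and clean up afterwards, with the commands below (pod names are from this demo and will differ in your cluster):

```shell
# Confirm the relabeled pod now carries env=prod and no longer matches
# the replication controller's selector
kubectl get pods --show-labels

# The unmanaged pod must be deleted manually if it is no longer needed;
# the replication controller will not touch it
kubectl delete pod rep-pod-vl9mm
```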

EXTERNAL LINKS

https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/

Thanks for reading this blog. We hope it helped you learn how Kubernetes manages pods during a node failure using a replication controller.

Loges
