Creation of K8s Cluster with containerd
Introduction
Does your application server keep shutting down or becoming unresponsive under high network traffic? This blog aims to give you a solution to those issues. After reading it, you will be able to create a Kubernetes cluster with containerd effortlessly. At the end of the blog, you will also get a surprise tip from our technical experts. So what are you waiting for? Let's explore the step-by-step guide for creating the cluster with containerd as the container runtime.
What is meant by Kubernetes cluster?
A Kubernetes cluster is a set of nodes that work together to run containerized applications efficiently. These nodes can be physical or virtual machines, and they are organized into master (control plane) and worker nodes. Let's break down the Kubernetes cluster's essential components:
Master node: API Server, Controller Manager, Scheduler
Worker node: Kubelet, Container Runtime, Kube Proxy
System Requirements
- Ubuntu 20.04 LTS
- Three Control plane nodes (2 vcpu, 4GB memory, 30GB HDD)
- Two worker nodes (12 vcpu, 8GB memory, 50GB HDD)
Pre-requisites
- In this blog, you will create a High Availability K8s cluster for your web application, so before proceeding with the cluster creation you need to complete the load balancer setup. To make it easier, we have already set up a highly available load balancer in our demo environment. To learn how to set up the load balancer using HAProxy, refer to our previous blog.
Note: Follow the below steps in all nodes until you reach the cluster initialization procedure.
Installation of Containerd
- To begin with, load the overlay and br_netfilter kernel modules, which are required for networking.
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
- Kubernetes requires the following sysctl values to be set to 1 so that iptables can see bridged traffic.
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
- Now, apply the settings made in the previous step without restarting the server.
sudo sysctl --system
- Now, install the curl into your server.
sudo apt install curl -y
- Get the apt-key and then add the repository from which the containerd needs to get installed.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- Now, update and install the containerd package in the server.
sudo apt update -y
sudo apt install -y containerd.io
- Set up the default configuration file.
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
- Next, we need to modify the containerd configuration file and ensure that the cgroup driver is set to systemd. To do so, run:
sudo sed -i '/SystemdCgroup/s/false/true/g' /etc/containerd/config.toml
With that, containerd is installed on the server. Now restart the containerd service to apply the modified configuration.
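The restart and a quick check of the cgroup-driver change might look like this (assuming a systemd-based Ubuntu host):

```shell
# Restart containerd so it picks up the updated config.toml
sudo systemctl restart containerd

# Verify the cgroup driver change took effect
grep SystemdCgroup /etc/containerd/config.toml
```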
Installation of Kubernetes
- To install the kubernetes in your server add the repository key and the repository.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
- Now, update your system and then install the 3 Kubernetes modules.
sudo apt update -y
sudo apt install -y kubelet kubeadm kubectl
Note: To install a specific K8s version, pin the version for each package: kubelet=1.2x.x kubeadm=1.2x.x kubectl=1.2x.x
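For example, pinning all three packages to the v1.23.0 release used later in this blog might look like the following (the -00 package revision is an assumption about this apt repo; check apt-cache madison kubeadm for the exact string your repo offers):

```shell
# Pin all three components to one release (version string is illustrative)
sudo apt install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00

# Prevent apt from upgrading them unexpectedly later
sudo apt-mark hold kubelet kubeadm kubectl
```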
- Make sure all the hostnames of the nodes are appropriate and update those entries in the host file /etc/hosts in every node.
10.1.1.1 k8-master-01
10.1.1.2 k8-worker-01
10.1.1.3 k8-worker-02
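A sketch of setting the hostname and appending the entries on one node (run the matching hostnamectl command on each node; IPs and names follow the example above):

```shell
# Set this node's hostname (adjust per node)
sudo hostnamectl set-hostname k8-master-01

# Append the cluster entries to /etc/hosts
cat <<EOF | sudo tee -a /etc/hosts
10.1.1.1 k8-master-01
10.1.1.2 k8-worker-01
10.1.1.3 k8-worker-02
EOF
```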
- Set up the firewall by adding the following rules on the master node:
sudo ufw allow 6443/tcp
sudo ufw allow 2379/tcp
sudo ufw allow 2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10251/tcp
sudo ufw allow 10252/tcp
sudo ufw allow 10255/tcp
sudo ufw reload
- Now, add the following rules on the worker nodes:
sudo ufw allow 10251/tcp
sudo ufw allow 10255/tcp
sudo ufw reload
- To allow the kubelet to work properly, we need to disable swap on all nodes.
sudo swapoff -a
- To make this permanent, remove the swap entries from the /etc/fstab file, and also delete the /swap.img file to reclaim disk space.
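One way to do this, commenting out the fstab entries rather than deleting the lines outright (a minimal sketch; review /etc/fstab by hand afterwards):

```shell
# Comment out any swap entries so swap stays off after a reboot
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab

# Remove the swap file itself to reclaim disk space
sudo rm -f /swap.img
```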
- Finally, enable the kubelet service on all nodes so we can start the initialization of the K8s cluster.
sudo systemctl enable kubelet
Cluster initialization in the Control plane node
1. Cluster image pull
Run the following command on the master node so that Kubernetes fetches the required images before cluster initialization.
sudo kubeadm config images pull
To pull a specific version of K8s, add the flag --kubernetes-version v1.23.0
2. Initialize the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --upload-certs --kubernetes-version=v1.23.0 --control-plane-endpoint=<LB-IP>:6443
After this completes, Kubernetes is initialized successfully; then proceed with installing a CNI plugin.
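As one option, the Flannel CNI plugin matches the 10.244.0.0/16 pod CIDR used in the init command above (the manifest URL is Flannel's published latest-release link and may change between releases):

```shell
# Install Flannel, whose default pod network is 10.244.0.0/16
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Watch the nodes move to the Ready state once the CNI pods are up
kubectl get nodes -w
```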
Note: Now, proceed to add worker nodes to the K8s cluster by using the command that is shown during the cluster initialization process.
Finally, to set the proper role label for each worker node, run this command on the master node:
kubectl label node <worker-node> node-role.kubernetes.io/worker=worker
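You can confirm the labels took effect; the ROLES column should now show worker instead of <none> for each labeled node:

```shell
# List nodes with their roles after labeling
kubectl get nodes
```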
Tuning Kube Controller and Kubelet for better High Availability
As we said above, this is our surprise for you. With the following steps, you can push your web application toward 99.9% uptime.
By default, when a worker node goes down, its pods are rescheduled onto another worker node only after about 5 minutes.
With the tweaks below, pods in all namespaces are evicted to another node within roughly 12 seconds. This increases your application's durability and keeps your web application available with an uptime of 99.9%.
Note: Depending on your applications, you can tune these settings further.
On all nodes, including the controller, execute the below command:
# kubeadm upgrade node phase kubelet-config
In the file /var/lib/kubelet/kubeadm-flags.env, add the below flag and restart the kubelet service.
--node-status-update-frequency=5s
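One way to append the flag and restart kubelet (the sed expression is a sketch against the usual KUBELET_KUBEADM_ARGS="..." format of that file; verify the resulting line by hand):

```shell
# Insert the flag at the start of the kubelet args line (sketch)
sudo sed -i 's/^KUBELET_KUBEADM_ARGS="/&--node-status-update-frequency=5s /' /var/lib/kubelet/kubeadm-flags.env

# Restart kubelet so the new flag takes effect
sudo systemctl restart kubelet
```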
On the controller machines, in the file /etc/kubernetes/manifests/kube-controller-manager.yaml, add the below flags:
- --node-monitor-period=3s
- --node-monitor-grace-period=10s
# service containerd restart
Create a file /etc/kubernetes/manifests/kubeadm-apiserver-update.yaml in the controller, with the content:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.27.4
apiServer:
  extraArgs:
    enable-admission-plugins: "DefaultTolerationSeconds"
    default-not-ready-toleration-seconds: "10"
    default-unreachable-toleration-seconds: "10"
# kubeadm init phase control-plane apiserver --config=kubeadm-apiserver-update.yaml
In /var/lib/kubelet/config.yaml, update the below variables and then restart the kubelet service on all nodes.
shutdownGracePeriod: 8s
shutdownGracePeriodCriticalPods: 5s
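A sketch of appending the two settings and restarting kubelet (if the keys already exist in config.yaml, edit them in place instead of appending duplicates):

```shell
# Append the graceful-shutdown settings to the kubelet config (sketch)
cat <<EOF | sudo tee -a /var/lib/kubelet/config.yaml
shutdownGracePeriod: 8s
shutdownGracePeriodCriticalPods: 5s
EOF

# Restart kubelet on every node so the settings apply
sudo systemctl restart kubelet
```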
Wrapping Up
Thank you for taking the time to read our blog. This blog, "How to create the K8s cluster with containerd?", was written by our staff Senthilnathan. We hope you found the information valuable and insightful. If you find any issues with the information provided in this blog, don't hesitate to contact us (info@assistanz.com).
Optimize your Kubernetes and never lose a valuable customer again!
Our mission is to ensure that your containers remain lightning-fast and protected at all times, monitored and maintained 24×7 by our experts.
Related Posts
How to set up a HAProxy load balancer in Ubuntu 20.04?
How to create the K8s replication sets? – Simple guide