Deploying a Kubernetes High Availability (HA) Cluster on Red Hat: –

I used Keepalived and the HAProxy load balancer to deploy the HA cluster.

My Deployment Overview: –

Master and worker node OS – Red Hat 8.6

Master nodes – 3

Worker nodes – 2

Kubernetes version – 1.24.0

Container Network Interface (CNI) Plugin – Flannel

Container Runtime Engine – CRI-O – 1.23

Role            IP            Hostname
Kube API VIP    172.90.0.15   kvip.kdinesh.in
Master node 1   172.90.0.16   master0.kdinesh.in
Master node 2   172.90.0.17   master1.kdinesh.in
Master node 3   172.90.0.18   master2.kdinesh.in
Worker node 1   172.90.0.19   worker0.kdinesh.in
Worker node 2   172.90.0.20   worker1.kdinesh.in

Deployment: –

Deploy five VMs on vSphere using the Red Hat 8.6 ISO.

Configure DNS entries for all the nodes and the VIP, and check that DNS is resolving.
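For example, a quick loop like the following (using the hostnames from the table above) confirms forward resolution; getent consults the same resolver path the nodes themselves will use:

```shell
# Verify that every node hostname and the VIP resolve in DNS.
fails=0
for host in kvip.kdinesh.in master0.kdinesh.in master1.kdinesh.in \
            master2.kdinesh.in worker0.kdinesh.in worker1.kdinesh.in; do
  if getent hosts "$host" > /dev/null; then
    echo "OK:   $host -> $(getent hosts "$host" | awk '{print $1}')"
  else
    echo "FAIL: $host does not resolve"
    fails=$((fails + 1))
  fi
done
echo "$fails hostname(s) failed to resolve"
```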

Install and configure Keepalived and HAProxy on the master nodes

To install and configure Keepalived and HAProxy, you can download the installation bash scripts using the links below (update the hostname, IP, etc. details based on your infra).

Run it on the first master node.

Run it on the second master node.

Run it on the third master node.
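The scripts themselves are not reproduced here. For reference, a minimal Keepalived and HAProxy configuration for this topology might look like the sketch below; the interface name, VRRP router ID, priorities, and auth password are assumptions — take the real values from the downloaded scripts:

```
# /etc/keepalived/keepalived.conf (sketch for the first master; use a
# lower priority on the other two masters so they act as backups)
vrrp_instance VI_1 {
    state MASTER
    interface ens192          # assumption: adjust to your NIC name
    virtual_router_id 51      # assumption: must match on all masters
    priority 101              # e.g. 100 on master1, 99 on master2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass <password>  # hypothetical placeholder
    }
    virtual_ipaddress {
        172.90.0.15           # the kube API VIP
    }
}

# /etc/haproxy/haproxy.cfg (frontend/backend sketch)
frontend kube-apiserver
    bind *:8443               # the port used by the kubeadm init command
    mode tcp
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server master0 172.90.0.16:6443 check
    server master1 172.90.0.17:6443 check
    server master2 172.90.0.18:6443 check
```

HAProxy listens on 8443 rather than 6443 because it runs on the same hosts as the kube-apiserver, which already binds 6443.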

Next, Installing and Configuring Kubernetes: –

To install and configure Kubernetes, download the bash script below (update the file based on your requirements).

Run the bash file below on all master and worker nodes.
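The script is not reproduced here. As a rough sketch, node preparation for kubeadm with CRI-O on RHEL 8 typically covers the following steps; repository setup and exact package versions are assumptions — verify against your downloaded script:

```shell
#!/bin/bash
# Sketch of kubeadm/CRI-O node preparation on RHEL 8 (run as root).

# Disable swap, as required by the kubelet.
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Load kernel modules needed for container networking.
modprobe overlay
modprobe br_netfilter

# Sysctl settings required by Kubernetes networking.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

# Install CRI-O 1.23 and Kubernetes 1.24 packages (repository setup
# omitted; use the repositories referenced in your downloaded script).
dnf install -y cri-o kubelet-1.24.0 kubeadm-1.24.0 kubectl-1.24.0
systemctl enable --now crio kubelet
```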

Deploying Kubernetes cluster using kubeadm

Run the below command on one of the master nodes

kubeadm init --control-plane-endpoint "172.90.0.15:8443" --upload-certs --pod-network-cidr=10.244.0.0/16

Note: – Replace the control-plane endpoint with your kube API VIP and port, and use a different CIDR for the pod network if needed.

Fig – 1.0

In the screenshot above (Fig – 1.0), you can see that the Kubernetes cluster installation completed successfully on the first master node.

To access the Kubernetes cluster, run the below commands and check the node status

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

To add the remaining master nodes two and three to the Kubernetes cluster, copy the command highlighted in red above (Fig – 1.0) and run it on the master two and master three nodes.

To join worker nodes one and two to the Kubernetes cluster, copy the command highlighted in yellow above (Fig – 1.0) and run it on both worker nodes.
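The highlighted commands are unique to each kubeadm init run, so they are not reproduced here; their general shape is as follows, where the token, hash, and certificate key are hypothetical placeholders — use the values printed by your own kubeadm init:

```
# Control-plane join (red-highlighted command), run on masters 2 and 3:
kubeadm join 172.90.0.15:8443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <cert-key>

# Worker join (yellow-highlighted command), run on workers 1 and 2:
kubeadm join 172.90.0.15:8443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```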

Once the nodes have joined the cluster, you can check the node and pod status.

kubectl get nodes -o wide

kubectl get pods -A

Installing the Flannel CNI

Run the command below (if you want to customize any specification, you can edit the YAML file before deploying).

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
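If you used a non-default pod CIDR in kubeadm init, the Network field in the manifest must match it. One way to inspect and adjust the file before applying (the grep target is an assumption — check the file yourself):

```shell
# Download the manifest, check the pod network CIDR, edit if needed.
curl -sLO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
grep -n '"Network"' kube-flannel.yml   # shows the CIDR in net-conf.json
# Edit the CIDR to match your --pod-network-cidr, then:
kubectl apply -f kube-flannel.yml
```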

Check the Flannel pod status and verify that the pods are running.

We can deploy a sample nginx pod and expose its service using the MetalLB load balancer.

First, deploy the MetalLB pods on the cluster:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

After the MetalLB pods are deployed, create and apply the MetalLB ConfigMap (update the IP address range based on your infra). I specified the address range 172.90.0.20-172.90.0.30.
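A ConfigMap for this address range, in the layer-2 format used by MetalLB v0.12, might look like the following sketch (the pool name default is an assumption):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.90.0.20-172.90.0.30
```

Save it as, for example, metallb-config.yaml and apply it with kubectl apply -f metallb-config.yaml.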

Next, we can deploy nginx and expose the nginx service using the MetalLB load balancer.

Deploy the pod using the sample nginx file below.
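The sample file might look like the following sketch (the names and label selectors are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer   # MetalLB assigns an IP from the configured pool
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```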

Below you can see the MetalLB and nginx pods running and the nginx service exposed on 172.90.0.20 (in MetalLB we specified the range 172.90.0.20-172.90.0.30).

You can access the nginx service externally using your browser.