Deploying a Kubernetes High Availability (HA) Cluster on Red Hat: –
I used Keepalived and an HAProxy load balancer to deploy the HA cluster.
My Deployment Overview: –
Master and worker node OS – Red Hat 8.6
Master nodes – 3
Worker nodes – 2
Kubernetes version – 1.24.0
Container Network Interface (CNI) Plugin – flannel
Container Runtime Engine – CRI-O – 1.23
Node | IP | Hostname
Kube API VIP | 172.90.0.15 | kvip.kdinesh.in
Master node 1 | 172.90.0.16 | master0.kdinesh.in
Master node 2 | 172.90.0.17 | master1.kdinesh.in
Master node 3 | 172.90.0.18 | master2.kdinesh.in
Worker node 1 | 172.90.0.19 | worker0.kdinesh.in
Worker node 2 | 172.90.0.20 | worker1.kdinesh.in
Deployment: –
Deploy five VMs (three masters, two workers) on vSphere using the Red Hat 8.6 ISO.

Configure DNS entries for all the nodes and the VIP, and check that the names resolve.
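If a DNS server is not available in your lab, /etc/hosts entries on every node are a workable stand-in. A minimal sketch using the addresses from the table above (the short aliases are illustrative):

```shell
# Append host entries on every node (VIP plus all cluster nodes).
# Adjust if your IPs or hostnames differ from the table above.
cat <<'EOF' | sudo tee -a /etc/hosts
172.90.0.15 kvip.kdinesh.in kvip
172.90.0.16 master0.kdinesh.in master0
172.90.0.17 master1.kdinesh.in master1
172.90.0.18 master2.kdinesh.in master2
172.90.0.19 worker0.kdinesh.in worker0
172.90.0.20 worker1.kdinesh.in worker1
EOF

# Verify that every name resolves:
for h in kvip master0 master1 master2 worker0 worker1; do
  getent hosts "$h.kdinesh.in"
done
```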

Install and configure Keepalived and HAProxy on the master nodes
To install and configure Keepalived and HAProxy, you can download the installation bash scripts using the below links (update the hostname, IP, and other details to match your infrastructure).
Run the script on the first master node, then on the second master node, and then on the third master node.
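The scripts themselves are not reproduced here, but the essence of what they set up looks roughly like the following sketch. The interface name ens192, the VRRP password, and the priorities are assumptions; HAProxy listens on 8443 (the port used later in kubeadm init) and balances to each master's API server on 6443.

```shell
# --- Keepalived: hold the VIP 172.90.0.15 (sketch; adjust NIC name) ---
cat <<'EOF' | sudo tee /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the other two masters
    interface ens192        # your NIC name (assumption)
    virtual_router_id 51
    priority 101            # e.g. 100 and 99 on the other masters
    authentication {
        auth_type PASS
        auth_pass k8s-vip   # illustrative password
    }
    virtual_ipaddress {
        172.90.0.15         # the kube API VIP
    }
}
EOF

# --- HAProxy: frontend on 8443, backends on each master's 6443 (sketch) ---
cat <<'EOF' | sudo tee -a /etc/haproxy/haproxy.cfg
frontend kube-apiserver
    bind *:8443
    mode tcp
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    balance roundrobin
    option tcp-check
    server master0 172.90.0.16:6443 check
    server master1 172.90.0.17:6443 check
    server master2 172.90.0.18:6443 check
EOF

sudo systemctl enable --now keepalived haproxy
```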

Next, install and configure Kubernetes
To install and configure Kubernetes, download the below bash script (update the script based on your requirements).
Run the script on all master and worker nodes.
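The script is not shown here, but node preparation for kubeadm with CRI-O typically covers the steps sketched below. Package repository setup for CRI-O 1.23 and Kubernetes 1.24.0 is distro-specific and omitted; treat this as an outline, not the exact script.

```shell
# Disable swap -- kubelet refuses to start with swap enabled by default.
sudo swapoff -a
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab

# Load the kernel modules needed for container networking.
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Sysctl settings so bridged pod traffic is visible to iptables.
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# Install CRI-O and the Kubernetes tools (repos assumed already configured),
# then start the runtime and kubelet.
sudo dnf install -y cri-o kubeadm-1.24.0 kubelet-1.24.0 kubectl-1.24.0
sudo systemctl enable --now crio kubelet
```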
Deploying Kubernetes cluster using kubeadm
Run the below command on one of the master nodes
kubeadm init --control-plane-endpoint "172.90.0.15:8443" --upload-certs --pod-network-cidr=10.244.0.0/16
Note: – replace the control-plane endpoint with your kube API VIP, and use a different CIDR for the pod network if needed.
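As an alternative to the long command line, the same init can be expressed as a kubeadm config file, which is easier to keep under version control (a sketch; the field values simply mirror the command above):

```shell
# Write a kubeadm ClusterConfiguration equivalent to the init flags above
# (v1beta3 is the config API version used by kubeadm 1.24).
cat <<'EOF' > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
controlPlaneEndpoint: "172.90.0.15:8443"   # HAProxy frontend on the VIP
networking:
  podSubnet: "10.244.0.0/16"               # flannel's default CIDR
EOF

sudo kubeadm init --config kubeadm-config.yaml --upload-certs
```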


Fig – 1.0
You can see in the screenshot above that the Kubernetes cluster installation on the first master node completed successfully.
To access the Kubernetes cluster, run the commands below, then check the node status:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To add master nodes 2 and 3 to the Kubernetes cluster, copy the command highlighted in red in Fig 1.0 and run it on the second and third master nodes.
To add worker nodes 1 and 2 to the Kubernetes cluster, copy the command highlighted in yellow in Fig 1.0 and run it on the two worker nodes.
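The exact join commands are printed by kubeadm init and contain a token and CA hash unique to your cluster; the values below are placeholders showing only their general shape.

```shell
# Control-plane join (masters 2 and 3) -- placeholders; use the
# red-highlighted command from your own kubeadm init output:
sudo kubeadm join 172.90.0.15:8443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>

# Worker join (workers 1 and 2) -- the yellow-highlighted command:
sudo kubeadm join 172.90.0.15:8443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# If the token has expired, regenerate a join command on an existing master:
sudo kubeadm token create --print-join-command
```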


Once the nodes are added to the cluster, check the node and pod status:
kubectl get nodes -o wide
kubectl get pods -A

Installing CNI Flannel
Run the below command (if you want to customize any specification, edit the YAML file before deploying it):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check the flannel pod status and verify that the pods are running.

We can deploy a sample nginx pod and expose its service using the MetalLB load balancer.
First, deploy MetalLB on the cluster:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
After the MetalLB pods are deployed, create and apply the MetalLB ConfigMap (update the IP address range based on your infrastructure). I specified the address range 172.90.0.20-172.90.0.30.
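For MetalLB v0.12.x the pool is defined in a ConfigMap named config in the metallb-system namespace. A sketch for the range mentioned above:

```shell
# Layer-2 address pool for MetalLB v0.12.x; adjust the range to your network.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.90.0.20-172.90.0.30
EOF
```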

Next, we can deploy nginx and expose the nginx service using the MetalLB load balancer.
Deploy the pod using the below sample nginx file.
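The sample file is not reproduced here; a minimal stand-in (names and image tag are illustrative) could look like this:

```shell
# A single-replica nginx Deployment plus a LoadBalancer Service;
# MetalLB assigns the Service an IP from the configured pool.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```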
You can check below that the MetalLB and nginx pods are running and that the nginx service is exposed on 172.90.0.20 (the range we specified in MetalLB was 172.90.0.20-172.90.0.30).
You can then access the nginx service externally from your browser.


I used the above process to create an HA Kubernetes cluster with 3 masters and 2 workers. I got the below error after the kubeadm init step. Please suggest a fix.
[root@vm105 ~]# kubeadm init --control-plane-endpoint "172.90.0.15:8443" --upload-certs --pod-network-cidr=10.244.0.0/16
I0403 13:41:34.713424 1053214 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.8
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [fenrir-vm105 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 9.11.48.78 172.90.0.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [fenrir-vm105 localhost] and IPs [9.11.48.78 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [fenrir-vm105 localhost] and IPs [9.11.48.78 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0403 13:42:39.383925 1053214 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0403 13:42:39.597689 1053214 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0403 13:42:39.740591 1053214 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0403 13:42:39.858344 1053214 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
[root@vm105 ~]#
Hi,
Please check the firewall and confirm that the required ports are permitted at the firewall level. Also, verify DNS resolution.
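For example, on RHEL with firewalld, the ports flagged in the preflight warning can be opened like this on the master nodes. Ports beyond 6443 and 10250 are the usual Kubernetes control-plane defaults, not values taken from your log, so adjust to your setup:

```shell
# Open the control-plane ports on each master node, including the
# HAProxy frontend port (8443) used as the control-plane endpoint.
sudo firewall-cmd --permanent --add-port=6443/tcp       # kube-apiserver
sudo firewall-cmd --permanent --add-port=8443/tcp       # HAProxy frontend (VIP)
sudo firewall-cmd --permanent --add-port=10250/tcp      # kubelet
sudo firewall-cmd --permanent --add-port=2379-2380/tcp  # etcd client/peer
sudo firewall-cmd --permanent --add-port=10257/tcp      # kube-controller-manager
sudo firewall-cmd --permanent --add-port=10259/tcp      # kube-scheduler
sudo firewall-cmd --reload
```

Also note that in your log the certificates are signed for 9.11.48.78 rather than an address in the 172.90.0.x range, so it is worth checking which network the node and the VIP actually sit on.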
Thanks
DineshReddy