If you want to deploy a Red Hat OpenShift Container Platform (OCP) cluster on bare metal, vSphere, or any other platform, you can follow this document.

This document was prepared using the vSphere platform, but by following the steps below you can deploy OpenShift on other platforms as well.

Basic Requirements: –

  1. Bastion machine – any Linux/Windows/macOS machine – used for deploying the OpenShift cluster.
  2. Temporary bootstrap machine – once the cluster is deployed, you can remove the bootstrap machine.
  3. Minimum of three master nodes.
  4. At least two worker nodes recommended.
  5. HAProxy load balancer and HTTP server (you can also use any existing load balancer).
  6. One network – DHCP or static.
  7. DNS.
  8. NTP – optional.
  9. License (you can try the 60-day evaluation license).

Recommended Resources for cluster machines: –

My Deployment Environment Overview: –

vSphere Version – 8.0

Network – Static IPs

Cluster name – ocp4

DNS – 192.168.5.150, DNS Name – kdinesh.in

Red Hat OpenShift (OCP) Version – 4.11

Deployment Overview: –

  1. Download the required installation packages (OpenShift installer, CLI tool, pull secret).
  2. Configure the required DNS records in the DNS server.    ## For OpenShift, create DNS records with the cluster name. Ex: – Hostname.Clustername.Basedomain – bastion.ocp4.kdinesh.in
  3. Deploy the bastion machine and copy the downloaded packages.
  4. Deploy and configure the HAProxy load balancer.
  5. Generate an SSH key, install the OpenShift installer and OpenShift CLI tool, and generate the manifest and ignition files.
  6. Deploy and configure the httpd service and copy the ignition (.ign) files to it.
  7. Download the RHCOS ISO.
  8. Create machines using RHCOS.
  9. Configure the OpenShift cluster.

  1. Downloading Required Packages: –

Download the OpenShift installer, client CLI, and pull secret from the OpenShift portal – https://access.redhat.com/downloads/content/290

2. Configure DNS records: –

Forward lookup records

PTR Records

Check whether the DNS records are working or not
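As an illustration, the forward and PTR records for this environment could look like the BIND zone fragment below. The record names (api, api-int, *.apps, bootstrap, masters, workers) are the standard ones OpenShift requires; all IPs other than the DNS server are placeholders for your own addresses:

```
; forward zone for kdinesh.in – node IPs below are placeholders
api.ocp4        IN  A    192.168.5.160   ; points at the load balancer
api-int.ocp4    IN  A    192.168.5.160   ; points at the load balancer
*.apps.ocp4     IN  A    192.168.5.160   ; ingress wildcard, load balancer
bastion.ocp4    IN  A    192.168.5.160
bootstrap.ocp4  IN  A    192.168.5.161
master1.ocp4    IN  A    192.168.5.162
worker1.ocp4    IN  A    192.168.5.165

; matching PTR record in the reverse zone, e.g.
161             IN  PTR  bootstrap.ocp4.kdinesh.in.
```

You can verify resolution with dig api.ocp4.kdinesh.in and dig -x <node IP> against your DNS server.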

3. Deploy the Bastion Machine: –

       Created a CentOS 8 Stream VM

Copy the downloaded Red Hat OpenShift 4.11 installer and client CLI files to the bastion machine

4. Deploying and Configuring the HAProxy Load Balancer: –

In my environment, HAProxy was deployed on the bastion machine. If your environment already has a load balancer, you can use it, or you can configure a dedicated machine as the HAProxy load balancer.

HAProxy installation command – yum install haproxy

You can back up the default HAProxy configuration, create a new file named haproxy.cfg, and paste the HAProxy config into it.

Sample HAProxy config file, which you can customize and use – https://github.com/Dineshk1205/OCP/blob/main/haproxy.cfg
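For orientation, a minimal sketch of what such a config contains (the linked file is the authoritative sample; backend IPs here are placeholders): TCP frontends for the API (6443) and machine config server (22623) balancing across bootstrap and masters, plus HTTP/HTTPS frontends (80/443) balancing across the workers.

```
# minimal haproxy.cfg sketch – IPs are placeholders, not from this deployment
frontend api
    bind *:6443
    mode tcp
    default_backend api-be
backend api-be
    mode tcp
    balance roundrobin
    server bootstrap 192.168.5.161:6443 check   # remove once bootstrap completes
    server master1   192.168.5.162:6443 check

frontend machine-config
    bind *:22623
    mode tcp
    default_backend mc-be
backend mc-be
    mode tcp
    server bootstrap 192.168.5.161:22623 check
    server master1   192.168.5.162:22623 check
```

The ingress frontends for ports 80 and 443 follow the same pattern, pointing at the worker nodes.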

Save the configuration file, then start haproxy, enable it, and check its status:

systemctl start haproxy           ## For starting the Haproxy service

systemctl enable haproxy       ## After reboot Haproxy will start automatically

systemctl status haproxy         ## Check haproxy running or any errors

5. Generating an SSH Key, Installing the OpenShift Installer and CLI Tool, and Generating Manifest and Ignition Files: –

Generate SSH Key

ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

eval "$(ssh-agent -s)"                     ## start the ssh-agent

ssh-add ~/.ssh/id_ed25519   ## add the key to the agent

Extract the copied OpenShift installer file

tar -vxf filename

Extract the OpenShift Client CLI file and copy it to /usr/local/bin

tar xvzf filename

cp kubectl oc /usr/local/bin/

Create a new directory – mkdir ocp

Change to the new directory – cd ocp

Create a new file – install-config.yaml

Sample install-config.yaml file – https://github.com/Dineshk1205/OCP/blob/main/install-config.yaml

You can update the above sample install-config.yaml based on your configuration and requirements

Save the install-config.yaml file in the newly created directory (in my deployment, it's ocp)
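For orientation, a trimmed sketch of what an install-config.yaml for this environment contains (the linked sample is authoritative; the pull secret and SSH key values are placeholders):

```yaml
apiVersion: v1
baseDomain: kdinesh.in
metadata:
  name: ocp4                 # cluster name
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 0                # in UPI, workers join after install via worker.ign
platform:
  none: {}                   # user-provisioned infrastructure
pullSecret: '<paste your pull secret here>'
sshKey: '<paste the contents of ~/.ssh/id_ed25519.pub here>'
```

A full file also carries a networking section; keep the values from the linked sample unless you have a reason to change them.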

Switch back to the directory where the OpenShift installer was extracted and run the command below

./openshift-install create manifests --dir <path>  ## the command generates the manifests in the specified directory

Verify that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines.
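A quick sketch of the check and fix, run here against a mock manifest rather than the real installer output (the file the installer generates has the same field under spec):

```shell
# Create a mock scheduler manifest for demonstration; in a real deployment
# edit <installation_directory>/manifests/cluster-scheduler-02-config.yml
mkdir -p demo/manifests
cat > demo/manifests/cluster-scheduler-02-config.yml <<'EOF'
apiVersion: config.openshift.io/v1
kind: Scheduler
spec:
  mastersSchedulable: true
EOF

# Flip the flag to false so pods are not scheduled on control plane nodes
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' demo/manifests/cluster-scheduler-02-config.yml

# Confirm the change
grep mastersSchedulable demo/manifests/cluster-scheduler-02-config.yml
```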

Generate ignition config

./openshift-install create ignition-configs --dir <path>   ## the command generates bootstrap.ign, master.ign, worker.ign, etc.

6. Deploy and Configure the httpd Service and Copy the .ign Files to It: –

yum -y install httpd

After installation, change the httpd port from 80 to 8080

vi /etc/httpd/conf/httpd.conf
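The port change is a one-line edit to the Listen directive; the sketch below demonstrates it on a sample copy rather than the real /etc/httpd/conf/httpd.conf:

```shell
# Sample stand-in for /etc/httpd/conf/httpd.conf
cat > httpd.conf.sample <<'EOF'
Listen 80
ServerRoot "/etc/httpd"
EOF

# Change the port httpd listens on from 80 to 8080
sed -i 's/^Listen 80$/Listen 8080/' httpd.conf.sample

grep '^Listen' httpd.conf.sample
```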

Start and enable the httpd service

Copy the previously generated .ign files to the httpd directory – /var/www/html

cp bootstrap.ign master.ign worker.ign /var/www/html

Assign permissions to the copied .ign files and check whether they are accessible over their URLs
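A sketch of the permission step, demonstrated on placeholder files (in the real deployment the files live in /var/www/html, and you would then verify access with something like curl -I http://<bastion IP>:8080/bootstrap.ign):

```shell
# Placeholder webroot standing in for /var/www/html
mkdir -p webroot
touch webroot/bootstrap.ign webroot/master.ign webroot/worker.ign

# Make the ignition files world-readable so httpd can serve them
chmod 644 webroot/*.ign

stat -c '%a %n' webroot/*.ign
```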

7. Download the RHCOS ISO file:-

./openshift-install coreos print-stream-json | grep '\.iso[^.]'   ## the command shows ISO file links compatible with the installer

You can choose and download from the links above based on your architecture.

  • Deploying bootstrap, master, and worker machines using RHCOS   ## if you are deploying on bare metal, you can skip the VM-creation steps below and boot the server from the ISO instead:-

Mine is a vSphere environment, so I first upload the ISO to a vSphere datastore and create VMs from it. If you are doing a bare-metal deployment, boot the server using the RHCOS ISO.

Uploaded ISO into datastore

For creating VMs – Click on vCenter Cluster/Host > New Virtual Machine

Create a New Virtual machine > NEXT

Enter Virtual Machine name > Next

Select compute Resource > Click on Next

Select the datastore where you want to deploy the VM > Click next

Click on Next

Select OS Family as Linux > OS version as Other Linux 64 > Click on Next

Allocate at least 2 vCPUs, 8 to 12 GB RAM, and 100 GB storage, and mount the RHCOS ISO > Click on Next

Review selected details > Click on Finish

OCP-Bootstrap VM Created Successfully

Similarly, you can create the master and worker VMs (in my env I created 1 bootstrap, three masters, and two workers; if you want to deploy more master or worker nodes, you can)

Power on Bootstrap VM First

Right Click on VM > Power > Power On

8. Configuring OpenShift Cluster: –

After powering on the VM or booting the bare-metal server with the RHCOS ISO, on first boot the server will automatically log in as the CoreOS core user.

First, configure a static IP and hostname on the nodes. (If you're using a DHCP network, you can skip the static IP configuration)

Run – sudo nmtui (the command shows the network and hostname configuration options)

Click on Edit a connection.

It will show all available NIC cards. Choose NIC card > Click on Edit

Change the IPv4 Configuration from Automatic to Manual > click on Automatic

Select the Manual option

Assign IP address, Gateway, DNS Servers, Search Domains > Click on OK

Network configuration completed > Click on Back

Click on Activate a connection (to check whether the configured network is active)

An asterisk (*) indicates the network is active > click on Back

Click on Set system hostname (for configuring the hostname)

Enter Hostname> Click on Ok

Network and Hostname configuration completed > Click on Quit 

Next, install RHCOS with the ignition config

sudo coreos-installer install --copy-network --ignition-url=<httpd URL>/<ignition file> <disk> --insecure-ignition   ## --copy-network saves the network and hostname configuration; specify bootstrap.ign for the bootstrap node, master.ign for master nodes, and worker.ign for worker nodes; the target disk is usually /dev/sda – check the available disks and adjust

On the bootstrap node run

$ sudo coreos-installer install --copy-network --ignition-url=http://172.90.0.25:8080/bootstrap.ign /dev/sda --insecure-ignition

On a master node

$ sudo coreos-installer install --copy-network --ignition-url=http://172.90.0.25:8080/master.ign /dev/sda --insecure-ignition

On a worker node

$ sudo coreos-installer install --copy-network --ignition-url=http://172.90.0.25:8080/worker.ign /dev/sda --insecure-ignition

It will take some time to install CoreOS and apply the ignition config > once the installation is completed, reboot the node

After the reboot, you can check that the network, hostname, and ignition config have been applied to the node

Similarly, you can configure all the remaining nodes

Configure the IP and hostname using the sudo nmtui command

Install CoreOS and the ignition config; once the configuration is completed and the nodes reboot, cluster provisioning will start.

Cluster creation will take some time depending on the infrastructure

You can run the below command on bastion to check the cluster provision status

./openshift-install --dir <path> wait-for bootstrap-complete --log-level=info   ## use --log-level=debug for more detail

You can check the node status after a few minutes using kubectl get nodes / oc get nodes

Also, check the CSR status for any pending CSRs, and approve any that are pending

oc get csr (the command shows the CSRs with their status)

For approving all csr run the following command

oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

Check the OpenShift console URL and copy the hostname (for accessing the OpenShift console)

Copy the above host and paste it into a browser > log in with the username kubeadmin; for the password, check on the bastion node – cat <installation_directory>/auth/kubeadmin-password (in my env the OpenShift files were generated in the ocp directory)

In the OpenShift GUI, also check the node status

Optional – If you want to change the OpenShift branding logo on the login console, copy the logo to the bastion host (max logo height 60 px)

Create a configmap

oc create configmap <configmap name> --from-file <path> -n openshift-config

Ex:

oc create configmap ocplogo --from-file /root/ocplogo.png -n openshift-config

Edit console operator config

oc edit consoles.operator.openshift.io cluster

Add the custom logo details under the spec section
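The screenshot with the exact fields is not reproduced here; a hedged sketch of the customization stanza, assuming the configmap is named ocplogo and the uploaded file key is ocplogo.png as in the example above:

```yaml
spec:
  customization:
    customLogoFile:
      key: ocplogo.png   # the file key inside the configmap
      name: ocplogo      # the configmap name in the openshift-config namespace
```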

Refresh the console – the custom logo configured above will be displayed.