OpenShift supports multiple deployment methods, including UPI, IPI, and the Assisted Installer.

Previously, if you wanted to deploy OpenShift using an existing external load balancer, UPI (User-Provisioned Infrastructure) was the only supported deployment method. This approach involved more manual steps and was typically used when a high level of customization was required.

In the latest OpenShift versions, the Assisted Installer also supports deployments using an external load balancer. This makes the Assisted Installer the simpler and recommended option for most use cases, especially when extensive customization is not needed, while still allowing deployment behind an external load balancer.

Prerequisites:

  1. DNS
  2. Load balancer (LB) (optional – you can also use the OCP cluster's internal load balancing)
  3. Internet access
  4. Red Hat subscription
  5. Sufficient resources (memory, CPU, storage, and networks), with the required firewall rules enabled
  6. Jump server/bastion server – to access the nodes and the OCP CLI
DNS

You need a base domain name, and you must ensure that the following requirements are met:

  • There must be no wildcard record such as *.<cluster_name>.<base_domain>; if one exists, the installation will not proceed.
  • A DNS A/AAAA record for api.<cluster_name>.<base_domain>.
  • A DNS A/AAAA record with a wildcard for *.apps.<cluster_name>.<base_domain>.
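As an illustration, these records could look like the following BIND-style zone fragment, using the example cluster name (ocp4), base domain (azurelocal.lab), and VIPs used later in this post; adjust the values for your environment:

```text
; Illustrative zone fragment for azurelocal.lab (values from this post's example environment)
api.ocp4.azurelocal.lab.      IN A 172.16.16.103   ; API VIP
*.apps.ocp4.azurelocal.lab.   IN A 172.16.16.104   ; wildcard for ingress/apps
; Do NOT create *.ocp4.azurelocal.lab - a cluster-level wildcard blocks the installation
```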

You can deploy a highly available two-node load balancer on two servers/VMs using Keepalived and HAProxy, or you can use any existing load balancer.

Please find the sample LB configuration files below.

HAProxy config file:

https://github.com/Dineshk1205/ocp4.20/blob/main/haproxy.cfg
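For reference, an OpenShift HAProxy configuration typically balances the API (6443), the machine-config server (22623), and ingress (80/443) across the nodes. The excerpt below is an illustrative sketch using the node IPs from this post's example environment, not the full linked file; the 22623 and 80 frontends follow the same pattern:

```text
# Illustrative haproxy.cfg excerpt (node IPs from this example environment)
frontend api
    bind *:6443
    mode tcp
    default_backend api-be

backend api-be
    mode tcp
    balance roundrobin
    option tcp-check
    server controlplane0 172.16.16.106:6443 check
    server controlplane1 172.16.16.107:6443 check
    server controlplane2 172.16.16.108:6443 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-be

backend ingress-https-be
    mode tcp
    balance source
    server controlplane0 172.16.16.106:443 check
    server controlplane1 172.16.16.107:443 check
    server controlplane2 172.16.16.108:443 check
```

On a three-node compact cluster the control plane nodes also run the ingress routers, which is why the same three servers appear in both backends.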

Keepalived config file:

https://github.com/Dineshk1205/ocp4.20/blob/main/keepalived.conf
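Similarly, Keepalived floats the VIPs between LB01 and LB02 with one VRRP instance per VIP. A minimal sketch for the API VIP on LB01 follows; the interface name, router ID, priority, and password are assumptions you must adapt:

```text
# Illustrative keepalived.conf excerpt for LB01 (interface/priority/password are assumptions)
vrrp_instance VI_API {
    state MASTER            # BACKUP on LB02
    interface eth0          # replace with your LB NIC
    virtual_router_id 51
    priority 101            # use a lower value (e.g. 100) on LB02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme  # placeholder
    }
    virtual_ipaddress {
        172.16.16.103/24    # API VIP
    }
}
```

A second vrrp_instance for the ingress VIP 172.16.16.104 follows the same pattern with its own virtual_router_id.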

DNS server – 172.16.16.100, base domain – azurelocal.lab

LB01 – 172.16.16.101, LB02 – 172.16.16.102

API VIP – 172.16.16.103

*.apps VIP – 172.16.16.104

Cluster name – ocp4

Node0 – controlplane0.ocp4.azurelocal.lab – 172.16.16.106

Node1 – controlplane1.ocp4.azurelocal.lab – 172.16.16.107

Node2 – controlplane2.ocp4.azurelocal.lab – 172.16.16.108

Disks per node – 150 GB for the OS, three 500 GB disks for ODF

NICs per node – 6: NIC1, NIC2 – bond0 (management); NIC3, NIC4 – bond1 (ODF storage); NIC5, NIC6 – bond2 (virtualization traffic)

Node Mgmt network – 172.16.16.0/24 – Untagged network

ODF Public Network VLAN ID – 40 – 10.10.0.0/22

ODF Cluster Network VLAN ID – 50 – 10.20.20.0/22 (the ODF (OpenShift Data Foundation) cluster network does not require routing to any other network)

VM network VLAN ID – 20 – 10.10.20.0/24

VM Live migration VLAN ID – 30 – 172.16.30.0/24

Log in to the OpenShift portal (Red Hat Hybrid Cloud Console):

https://access.redhat.com/products/red-hat-hybrid-cloud-console/

After logging in, under Cluster management, select the Cluster List option and click the Create cluster option.

Select the Datacenter option.

Under Assisted Installer, click the Create cluster option.

Enter the cluster name and base domain, select OpenShift version 4.20, and select the static IP, bridges, and bonds network configuration option.

(Kindly verify that the required DNS entries have been created and that the same cluster name and base domain are in use.)
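A quick way to verify the DNS entries from the jump box is with dig; the commands below are a sketch using this post's example values:

```shell
# Verify DNS records before starting (values from this example environment)
dig +short api.ocp4.azurelocal.lab        # should return 172.16.16.103
dig +short test.apps.ocp4.azurelocal.lab  # any name under *.apps should resolve to 172.16.16.104
dig +short foo.ocp4.azurelocal.lab        # must return nothing - no cluster-level wildcard
```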

Click on Next.

Select the IPv4 network stack type and enter the DNS server, subnet, and default gateway. Click on Next.

For the management network we can configure a NIC bond. Select the Use bond option.

Enter the NIC1 and NIC2 MAC addresses and enter the node1 management IP.

Please note: if you are deploying on bare metal, copy the NIC1 and NIC2 MAC addresses that you will use for the management network. If you are deploying in a virtual environment, create a VM and copy its NIC MAC addresses.

Click the Add another host configuration option and enter the host2 MAC addresses and host/node IP.

Similarly, select Add another host configuration and enter the host3 MAC addresses and IP. Double-check the MAC and IP details of the nodes and click on Next.
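Behind the scenes, the Assisted Installer turns these per-host entries into an NMState-style static network configuration. A sketch of what the bond0 definition for controlplane0 might look like is shown below; the bonding mode, gateway address, and NIC names are assumptions:

```yaml
# Illustrative NMState-style config for controlplane0 (mode/gateway/NIC names are assumptions)
interfaces:
  - name: bond0
    type: bond
    state: up
    link-aggregation:
      mode: active-backup        # bonding mode is an assumption
      port:
        - nic1
        - nic2
    ipv4:
      enabled: true
      dhcp: false
      address:
        - ip: 172.16.16.106
          prefix-length: 24
dns-resolver:
  config:
    server:
      - 172.16.16.100
routes:
  config:
    - destination: 0.0.0.0/0
      next-hop-address: 172.16.16.1   # gateway is an assumption
      next-hop-interface: bond0
```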

We can install operators after the deployment is complete. Click Next to continue.

Click on the Add hosts option.

Select Provisioning type – Full image file.

Generate an SSH key pair on your jump box/bastion host, then copy and paste the SSH public key.

(The SSH key is injected into all the OCP cluster nodes, allowing you to access the nodes from the jump box or bastion server.)

Click on Generate Discovery ISO.

Download the Discovery ISO.

After downloading the ISO, boot each node using the ISO.

Below is a screenshot showing that bond0 has been configured automatically according to the specified configuration.

It will take a few minutes for the hosts to be discovered in the Red Hat portal, depending on your network.

All three nodes are discovered and their status is Ready.

If you have a three-node cluster, you can keep the auto-assignment of roles. The roles of control plane and worker are assigned automatically to the nodes.

Click on Next

The first disk is automatically selected as the OS disk. Click on Next.

You have two options for networking:

  1. Cluster-Managed Networking
  2. User-Managed Networking

If you have already set up a load balancer for your cluster, you can choose User-Managed Networking. If you have not configured an external load balancer for the cluster, you can opt for Cluster-Managed Networking and enter VIPs for the API IP and ingress IP, which will automatically deploy the required pods on the cluster.

I have already deployed the necessary load balancer, so I will proceed with User-Managed Networking. You can find the HAProxy and Keepalived reference files in the prerequisites section, which you can use for the load balancer setup.

Check the host status before proceeding. Click on Next.

Check all the configuration settings and click on Install cluster.

Depending on your network and infrastructure, the installation may take a while.

After the installation is finished, you can access the OpenShift UI by using the web console URL.

You can obtain the kubeconfig, copy it to the jump box or bastion server, and install the OCP client to access the cluster CLI.

Export the kubeconfig path, or alternatively, add the path permanently to the .bashrc file.
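For example, the CLI setup on the jump box might look like the following sketch; the kubeconfig path is an assumption, and the mirror URL is the standard location for the stable OpenShift client:

```shell
# Download and install the OpenShift CLI (oc) on the jump box - paths are examples
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz
tar -xzf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl

# Point oc at the downloaded kubeconfig for this session...
export KUBECONFIG=~/ocp4/auth/kubeconfig

# ...or make it permanent
echo 'export KUBECONFIG=~/ocp4/auth/kubeconfig' >> ~/.bashrc

# Verify cluster access
oc get nodes
```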

You can find the OpenShift cluster UI URL, username, and password on the Red Hat portal.

You can check the next post for the ODF setup and virtualization configuration.