To configure Tanzu Kubernetes as a service in VRA, we first need to deploy a VMware Tanzu Supervisor Cluster.
Deployment and Configuration Overview: –
- VMware Tanzu Supervisor Cluster (TKGS) Deployment and Configuration.
- Registering and configuring Tanzu Kubernetes Cluster with VRA.
- Deploying Tanzu Workload clusters using VRA.
- VMware Tanzu Supervisor Cluster (TKGS) Deployment and Configuration: –
Requirement for deploying vSphere with Tanzu: –
ESXi, vCenter Server 7.0 or above
HA and DRS need to be enabled on vCenter
Load Balancer – HAProxy or AVI
Network – VDS or NSX-T
DNS and NTP
Network requirements for deploying Tanzu supervisor and workload Kubernetes clusters: –
- Management Network – A block of 5 consecutive static IP addresses to be assigned from the Management Network to the Kubernetes control plane VMs in the Supervisor Cluster. The network must be able to access an image registry and have Internet connectivity.
- Workload Networks – For deploying workload Kubernetes clusters.
- Front-end/Load balancer network – The load balancer Virtual IPs (VIPs) are assigned from this network.
The Management Network and the Workload Network must be on different subnets, and the networks must be routable to each other.
My Deployment Environment: –
I used AVI as the load balancer and VDS networking.
| Component | Version | IP |
| --- | --- | --- |
| ESXi | 7.0.3 | 192.168.5.151 |
| vCenter Server | 7.0.3 | 192.168.5.152 |
| Windows DNS Server | 2022 | 192.168.5.150 (Domain – kdinesh.in) |
| AVI Controller | 22.1.2 | 172.90.0.6 |
| VMware vRealize Automation (VRA) | 8.10.2 | 172.90.0.9 |
| VMware vRealize Identity Manager (VIDM) | 3.3.6 | 172.90.0.8 |
| NTP | – | 0.in.pool.ntp.org |
| Network | VDS Port Name | CIDR | Comments |
| --- | --- | --- | --- |
| Management | vlan-91 | 172.91.0.1/24 | The Supervisor Cluster will use only 5 IPs |
| Workload | vlan-92 | 172.92.0.1/24 | The workload cluster will use IPs from this subnet |
| Frontend/Load Balancer | vlan-93 | 172.93.0.1/24 | 172.93.0.10-200 – VIPs; 172.93.0.201-250 – Service Engines |
1.1 vSphere Tanzu Kubernetes Cluster Deployment Steps: –
- AVI Controller deployment and configuration.
- Configuring vSphere Storage Policy and Tags.
- Deploying the vSphere management/supervisor cluster.
- AVI Controller deployment and configuration: –
Download the AVI Controller OVA file from the VMware AVI portal – https://portal.avipulse.vmware.com/software/vantage
Deploy the AVI Controller on vSphere:
Log in to vCenter Server > right-click on the Cluster/Host > choose the Deploy OVF Template option.

Select Local file > Upload Files > select the downloaded AVI Controller OVA and click NEXT.

Enter AVI virtual machine name > Select Location of Virtual machine > click on Next.

Select compute resource. Click on Next.

Click on Next.

Select datastore > Select Disk format. Click on next.

Choose network from the drop-down list. Click on next.

Enter the AVI management IP, subnet mask, gateway, and hostname. Click on Next.

Review all the entered/selected details, correct them if needed, and click on Finish.

Once the OVA deployment is completed (check the status in vCenter Recent Tasks), power on the AVI VM – right-click on the VM > Power > Power On.

After powering on the VM, it will take a few minutes to start all internal services. Open any browser and enter the URL – https://<AVI-hostname-or-IP>. Create an admin password.

Enter Passphrase, DNS, DNS Search Domain, and click on next.

You can enter SMTP server details if you want to connect AVI with SMTP/email. I'm not using an SMTP/mail server, so I chose None.

You can choose the options below based on your infrastructure and requirements. Click on Save.

Select Infrastructure > Clouds > click on the Default-Cloud settings icon. (Note: if you want to use AVI for vSphere with Tanzu, you have to keep the Default-Cloud and configure it.)

Choose the cloud type VMware vCenter/vSphere ESX. Click on Yes, Continue.

Click on the SET CREDENTIALS option.

Enter the vCenter Hostname/IP, Username, and Password. Click on Connect.

Once AVI is connected to vCenter, it will fetch all the information from vCenter. Select the datacenter and the content library. (You can choose any content library from vCenter; in my environment the "avi" content library is already selected. If no content library is present in your environment, you can create one and use it.) Click on SAVE & RELAUNCH.

Select the AVI management network – in my case, the Frontend/Load Balancer network – and configure the static IP pools: 172.93.0.10-200 for AVI VIPs and 172.93.0.201-250 for AVI Service Engines. Click on Save.

Next, create/configure the IPAM profile, DNS profile, and DNS resolver.
Click on the three dots next to IPAM Profile to create the IPAM profile.

Enter the IPAM name, select Default-Cloud as the cloud, and select the Frontend/Load Balancer network. Click on Save.

Click the three dots next to DNS Profile to create the DNS profile.

Enter a DNS profile name (any name) and enter the domain. Click on Save.

Click on the DNS resolver and enter the DNS server IP. Click on Save.

The IPAM profile, DNS profile, and DNS resolver configuration is complete. Click on Save.

You can check the cloud status; green means AVI is successfully integrated with vCenter.

Next, select Infrastructure > Service Engine Group > click on the pencil icon.

Select the cluster (the AVI Service Engines will be deployed on the selected cluster) and click Save.

Select Infrastructure > Network > Frontend/Load Balancer network (vlan-93) and click the pencil button.

Select the 172.93.0.10-200 IP pool for VIPs and the 172.93.0.201-250 IP pool for Service Engines. Click on Save.

Select infrastructure > VRF context > Global.

Create a static route from the management to the frontend/load balancer subnet. (The Service Engines and the workloads are on different networks, so we need to provide a route between them.)

Next, generate a self-signed certificate – to deploy the Tanzu Supervisor Cluster, you need to create your own self-signed certificate instead of using the default one.
Click on Administration > Access Settings, and click on the pencil button.

Unselect the existing SSL/TLS certificate. Click on Create Certificate.

Enter the required details. Click on Save.
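The certificate itself is generated inside the AVI UI, but it can help to know how to inspect the resulting PEM before you paste it into the Workload Management wizard later. This is only a sketch: it generates a throwaway self-signed certificate with an assumed CN (`avi.kdinesh.in` is a placeholder, not the real controller certificate) just to demonstrate the inspection commands:

```shell
# Throwaway self-signed certificate with an assumed CN (the real certificate
# comes from the AVI UI above; this one exists only to demonstrate inspection).
openssl req -x509 -newkey rsa:2048 -nodes -keyout avi.key -out avi.crt \
  -days 365 -subj "/CN=avi.kdinesh.in"

# Print the subject and validity window of a PEM before pasting it elsewhere.
openssl x509 -in avi.crt -noout -subject -dates
```

Running the same `openssl x509` inspection against the certificate you download from AVI is a quick sanity check that you copied the right file.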


Select the Allow Basic Authentication option and click on Save.

AVI Deployment and Configuration Completed.
- Configuring vSphere Storage Policy and Tags: –
Log in to vCenter Server > Click on the Tags & Custom Attributes option.

Click on Tags > Select Categories > Click on new.

Enter Category name. Click on Create.

Next, click on the TAGS option.

Click on New.

Enter the tag name and Select the previously created category. Click on Create.

Assign a tag to the datastore (you can assign a tag to multiple datastores). Select the datastore > Tags & Custom Attributes.

Select the tag and click on Assign.

Click on the Policies and profiles option.

Select the VM Storage Policies option > click on Create.

Enter the policy name, and Select vCenter. Click on next.

Select Enable tag-based placement rules. Click on next.

Select the tag category and tags. Click on Save.

You can see the tagged datastore shown under Compatibility. Click Next.

Click on finish.

Next, Configure the Content Library. Select the content library option > click on create.

Enter Name, and Select vCenter. Click on next.

Select the Subscribed content library option > enter the official Tanzu subscription URL – https://wp-content.vmware.com/v2/latest/lib.json (Tanzu OVA files are synchronized regularly from this link). Choose the option Download content immediately or When needed. Click on Next.

Click on Next. (you may also select Apply security policy )

Select the datastore. Click on Next. (The Tanzu templates occupy approximately 132 GB.)

Click on Finish.

You can see a total of 32 templates are synchronized, occupying 132 GB of storage.

- Deploying the vSphere management/supervisor cluster: –
Click on the Workload Management option.

Click on Get Started.

Select vCenter Server, and Select network stack as VDS. Click on Next.

Select vCenter cluster. Click on next.

Select the previously created Storage policy. Click on next.

For the load balancer, select the type NSX Advanced Load Balancer.

Enter the name, AVI Controller IP, username, and password. You also need to provide the server certificate.

Log in to AVI > Templates > SSL/TLS Certificates > select the previously generated self-signed certificate, and click on the download icon.

Click on copy to clipboard.

Paste the certificate into the Server Certificate box.

Enter the Management network details –
Network mode – static
Network – vlan-91
Starting IP address – 172.91.0.25, Subnet mask – 255.255.255.0, Gateway – 172.91.0.1, DNS server – 192.168.5.150, DNS domain – kdinesh.in, NTP – 0.in.pool.ntp.org
Click on Next.

Enter the Workload network details –
Network mode – static
Internal network for Kubernetes service – 10.96.0.0/23
Port group – vlan-92
IP address range – 172.92.0.10 – 172.92.0.250, Subnet – 255.255.255.0, Gateway – 172.92.0.1, DNS – 192.168.5.150, NTP – 0.in.pool.ntp.org.
Click on next.

Select previously created Content Library. Click on next.

Choose the Control plane node size based on your infrastructure and requirement.

In my case, I selected the Small size. Click on Finish.

You can check the cluster configuration status. It will take a few minutes based on your infrastructure.

You can see that my supervisor cluster deployment was completed. Cluster Endpoint IP – 172.93.0.11.

You can check in AVI that the cluster's virtual service has been configured.


To access the cluster, enter the Supervisor Cluster endpoint in any browser; from there you can download the kubectl vSphere CLI tools and access the cluster.

- Registering and configuring Tanzu Kubernetes Cluster with VRA: –
Login to VRA.


After login, Select the Cloud Assembly option.

Create a new project, or use an existing one. In my case, I am creating a new project: click on Infrastructure > Projects > New Project.

Enter the project name. Click on create.

You can see the new project "development" was created successfully.

Open the newly created development project. I selected sharing the project with all users, but you can choose a specific group/user.

Click on infrastructure > Kubernetes > Supervisor cluster > click on ADD Supervisor cluster.

Select the vSphere account. Select Supervisor Cluster.

Click on ADD.

You can see the Supervisor Cluster was successfully integrated with VRA.

You can open and check the address and other supervisor details.

Now we can create a Supervisor Namespace. (A namespace (ns) is an isolated pool of resources that the VMware administrator creates for Kubernetes developers and users to access, build, and manage their container environments. In many ways, it is similar to a vSphere resource pool. The namespace also serves as the gatekeeper for user access/denial.)
Before creating the supervisor namespace, you can see on my vCenter that no Namespace has been created.

In VRA, select Infrastructure > Kubernetes > Supervisor Namespaces > click on New Supervisor Namespace. (Note: you can also create a new namespace using blueprint options.)

Enter the namespace name; select the vSphere account, Supervisor Cluster, and project; and add the previously created storage policy. Click on Create.

You can see in vCenter that the Supervisor Namespace "development" was created. Next, add permissions, VM classes, and the associated content library details using vSphere.

Click on Add Permissions and grant access based on your requirements (Owner, Edit, or Read for a user/group).

By default, several VM classes are shown. You can select multiple VM classes based on your requirements. (A VM class defines the number of CPUs, RAM, and other configuration details.)

Also select the previously created library as the associated content library.

Now Go back to the VRA console. Click infrastructure> Kubernetes Zones > click on New Kubernetes zone.

In the Summary section, select the vSphere account, enter the zone name, and assign a tag. Click on Provisioning.

Select the newly created namespace "development" as the compute. Click on Add.

Assign a tag to the compute and check that the status is Ready.

You can see the Kubernetes zone configured successfully.

Next, create a cluster Plan. Select infrastructure > configure> cluster plans > click on new cluster plan.

Select the vSphere account, Enter the plan name, and Select the version (Automatically, VRA will fetch the supported Kubernetes version based on the Supervisor cluster version).

Select machine class, storage class, and network setting based on your requirements.

You can also specify network custom settings. I am using default network settings.

You can see my cluster plan – account: vSphere, name: small, Kubernetes version: 1.21, control plane and worker nodes: 1 each, machine class: xsmall, storage: the selected tag. For PVCs, I am also using the same storage.
Click on Create.

The cluster plan was created successfully.

Next, Assign Kubernetes zone to Projects. Select infrastructure> Administration> Project > open development project.

Select the Kubernetes Provisioning option > Add Zone > select the previously created development Kubernetes zone. Click on Save.

Click on Save.

Next, Design a cloud template for the Tanzu workload cluster.
Select Design > Cloud Templates > the New From option.

Select Blank Canvas.

Enter the template name, Select the project, and Share options. I am sharing the Cloud template with the development project only. Click on create.

On the left side, you can see the Kubernetes section. From it, drag and drop the K8S Cluster icon onto the canvas.

On the right side, you can see the code view and edit the code based on your requirements/infrastructure.

I prepared a cloud template. You can download it from the link below, update it for your infrastructure, and import and use it. URL – https://github.com/Dineshk1205/VRA/blob/main/tkgs.yaml
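Since the template itself is linked rather than shown, here is a minimal sketch of what such a Cloud Assembly template can look like. `Cloud.Tanzu.Cluster` is the standard vRA 8.x resource type for TKGS clusters, but the input names and the `plan: small` value are assumptions matching this walkthrough; compare against the linked tkgs.yaml before using it:

```shell
# Write a minimal TKGS cloud template sketch (field values assumed; verify
# against the linked tkgs.yaml and your own cluster plan names).
cat > tkgs-sketch.yaml <<'EOF'
formatVersion: 1
inputs:
  name:
    type: string
    title: TKG Cluster name
  workers:
    type: integer
    default: 1
resources:
  Cloud_Tanzu_Cluster_1:
    type: Cloud.Tanzu.Cluster
    properties:
      name: '${input.name}'
      plan: small          # the cluster plan created earlier
      workers: '${input.workers}'
EOF
```

The inputs surface in the Service Broker request form later, which is why the deployment request asks for a cluster name and worker count.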

Click on Version. Enter Version, and click on create.

The cloud template is complete. Next, we can publish the blueprint to the catalog.
Click the boxes icon at the top right and click on the Service Broker option.

Click on Content & Policies > content Sources > click on new.

Select content Sources as VMware Cloud Templates.

Enter the name, select the source project, validate, and click on Create & Import.


(Optional) Click on Content > select the created content and click on Configure Item.

You can upload the item Icon. I uploaded the Kubernetes icon. Click on Save.

Create a policy definition – select Definitions > click on New Policy.

Select the policy type Content Sharing Policy.

Enter the policy name, select the scope, and select the newly created item as the content sharing item. Click on Create. (I chose to share with all users/groups in the project; you can choose specific groups/users.)

You can check that the blueprint was successfully published to the catalog items. End users can see the TKG template item and deploy it based on their requirements.
- Deploying Tanzu Workload clusters using VRA: –
Click the request option for deploying the cluster.

Enter the Deployment name, TKG Cluster name, and the number of worker nodes. Select project, Cluster Plan, and Target namespace from the drop-down list.
Click on Submit.

You can check the deployment status – open the Deployments item.

You can see it is in progress. It will take a few minutes to provision the Kubernetes cluster, depending on the number of nodes and your infrastructure.

You can see my cluster deployment status – created successfully.

You can also see in my vCenter, in the development namespace, that the two tkube-clu0 cluster nodes were created.

In AVI, a VIP was created for the workload cluster.

After deploying the cluster, you can use VRA to directly apply YAML, scale worker nodes, and change VM classes and the Kubernetes version.

You can also check under VRA Infrastructure > Kubernetes > Clusters.


You can access the deployed cluster from Windows/Linux machines.
Download the vSphere and Kubernetes CLI tools from the Supervisor endpoint.

I am using a Linux machine to access the clusters and downloaded the CLI tools directly on Linux.

Unzip the downloaded file.

Copy the kubectl and kubectl-vsphere files to – /usr/local/bin.

Log in to the cluster using a vCenter SSO user.
Command – kubectl vsphere login --server=<supervisor-endpoint-IP> --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace <namespace-name> --tanzu-kubernetes-cluster-name <cluster-name> --vsphere-username user@domain
kubectl vsphere login --server=https://172.93.0.11 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace development --tanzu-kubernetes-cluster-name tkube-clu0 --vsphere-username administrator@kdinesh.in
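The login command above can be kept as a small reusable snippet. The values here are this lab's (replace with yours); the `echo` only prints the assembled command so you can review it, and removing the `echo` runs the actual login (`kubectl-vsphere` must be in your PATH):

```shell
# Lab values from this walkthrough; replace with your own environment's.
SERVER=https://172.93.0.11
NAMESPACE=development
CLUSTER=tkube-clu0
SSO_USER=administrator@kdinesh.in

# Print the login command for review; drop the `echo` to actually log in.
echo kubectl vsphere login --server="${SERVER}" \
  --insecure-skip-tls-verify \
  --tanzu-kubernetes-cluster-namespace "${NAMESPACE}" \
  --tanzu-kubernetes-cluster-name "${CLUSTER}" \
  --vsphere-username "${SSO_USER}"
```

After a successful login, `kubectl config use-context ${CLUSTER}` selects the workload cluster context and `kubectl get nodes` lists its nodes.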

After logging in, you can check the nodes' status.

Next, create a cluster role binding. After creating the role binding, we can deploy pods on the cluster.
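The exact binding depends on your security requirements. A commonly used lab-only example on TKGS clusters binds the `psp:vmware-system-privileged` ClusterRole to all authenticated users so pods can be admitted; treat this as a sketch and review it before using it anywhere non-lab:

```shell
# Lab-only ClusterRoleBinding: grants the privileged PSP ClusterRole to all
# authenticated users on the workload cluster (names follow the common TKGS
# example; tighten this for real environments).
cat > rolebinding.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-tkg-admin-privileged-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:vmware-system-privileged
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
EOF
# Apply it while logged in to the workload cluster:
# kubectl apply -f rolebinding.yaml
```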

I deployed a sample nginx pod on the newly created Kubernetes cluster. The pod is running successfully.
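The nginx smoke test can be reproduced with a small manifest. The deployment name and image tag here are assumptions (the original pod's spec is not shown); this writes the manifest so you can apply it to the new cluster:

```shell
# Minimal nginx Deployment used as a smoke test (names/image assumed).
cat > nginx-test.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
EOF
# While logged in to the workload cluster:
# kubectl apply -f nginx-test.yaml
# kubectl get pods
```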

