Prerequisites
- Red Hat OpenShift cluster deployed and operational
For deploying and configuring a secondary network for VMs, you can refer to my earlier post: OpenShift Virtualization Configuration, Secondary VM Network Setup, and Live Migration Configuration.
- VMware vSphere 6.5+ environment with API access
- VMware VDDK (Virtual Disk Development Kit) 7.0 or 8.0
- Container registry configured (internal or external)
- vSphere credentials (to create a dedicated user and assign the required permissions, follow https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.11/html/planning_your_migration_to_red_hat_openshift_virtualization/assembly_provider-specific-requirements-for-migration_mtv#vmware-privileges_mtv)
- Sufficient storage capacity in OpenShift
Step 1: Verify OpenShift Cluster Health
Before beginning the migration process, you must verify that your OpenShift cluster is healthy, has sufficient capacity, and is running the required version. The cluster should have at least 3 control plane nodes and sufficient worker nodes for your workloads.
Check Cluster Nodes
Execute the following command to verify all nodes are in Ready state:
oc get nodes

You should see all nodes in “Ready” status with appropriate roles (control-plane, worker). In this example, we have 3 control plane nodes (master0, master1, master2).
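Beyond node status, it is worth confirming the cluster version and that all cluster operators are healthy before starting. A minimal set of additional checks (standard `oc` commands, run against your own cluster) might look like:

```shell
# Confirm the running OpenShift version and update status
oc get clusterversion

# All cluster operators should report Available=True and Degraded=False
oc get clusteroperators
```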
Step 2: Install Migration Toolkit Operators
The Migration Toolkit for Virtualization comes bundled with several operators that must be installed from the OpenShift OperatorHub. These operators handle networking, storage, logging, and virtualization features required for VM migration.
Required Operators
- Migration Toolkit for Virtualization (MTV)
- Kubernetes NMState Operator (network configuration)
- OpenShift Virtualization

All operators should show “Succeeded” status in the operator list. They are pre-configured with all necessary settings and handle migration-related tasks automatically.
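Installation status can also be verified from the CLI. Assuming the default operator namespaces (openshift-mtv for MTV, openshift-cnv for OpenShift Virtualization, openshift-nmstate for NMState), a quick check might be:

```shell
# Each ClusterServiceVersion (CSV) should report a "Succeeded" phase
oc get csv -n openshift-mtv
oc get csv -n openshift-cnv
oc get csv -n openshift-nmstate
```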
Step 3: Configure VDDK and the Container Registry (CRITICAL FIRST STEP)
⚠️ WARNING: This is the most critical step. VDDK configuration must be completed BEFORE creating providers or migration plans. Failure to properly configure VDDK will result in failed migrations and data transfer errors.
Understanding VDDK
The VMware Virtual Disk Development Kit (VDDK) is the library that enables direct, efficient access to VM disk data at the block level. Without VDDK, the MTV toolkit cannot efficiently transfer VM disks from vSphere to OpenShift. VDDK provides:
- Direct block-level access to VM storage
- Optimized data transfer protocols
- Change block tracking for incremental migrations
- Network-optimized data streaming
Step 3.1: Download VDDK
First, download the VDDK tar file from VMware. The version must match your vSphere version.

The file shown is vmware-vix-disklib-8.0.3, which is the VDDK library for vSphere 8.0. This must be extracted and made available to the MTV toolkit.
Step 3.2: Create VDDK Image and Register with Registry
The VDDK tar file must be converted into a container image and pushed to your container registry. The register.sh script automates this process:
- Extract VDDK tar file
- Create a container image containing VDDK
- Push the image to your container registry
- Configure MTV to use the registry image
You can download the register.sh script from the repository below and run it; it automatically builds the VDDK image and pushes it to your registry.
https://github.com/Dineshk1205/vsphere-to-Openshift.git
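If you prefer to perform these steps by hand, the process the script automates can be sketched as follows. This is a hedged outline based on the general MTV approach; the registry URL and image tag are placeholders you must replace with your own values:

```shell
# Extract the VDDK archive downloaded from VMware
tar -xzf VMware-vix-disklib-8.0.3-*.x86_64.tar.gz

# Build a minimal image that only carries the VDDK libraries
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi-minimal
USER 1001
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
EOF
podman build -t <your-registry>/vddk:8.0.3 .

# Push to a registry the OpenShift cluster can pull from
podman push <your-registry>/vddk:8.0.3
```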
Step 4: Create VMware Provider with VDDK
Now create the VMware provider. This provider connection allows MTV to communicate with your vSphere environment and inventory your VMs.
Provider Configuration Form
The provider requires the following information:
- Provider Name: Descriptive name for your vSphere environment
- Provider Type: VMware vSphere
- API Endpoint URL: vCenter API endpoint (https://vcenter.example.com/sdk)
- Credentials: vSphere username and password
- Certificate Validation: SSL certificate verification settings
- VDDK Image: URL to your configured VDDK image
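The same provider can also be declared as a custom resource instead of through the UI. The sketch below assumes the default openshift-mtv namespace, placeholder hostnames, and a pre-created credentials Secret; field names follow the forklift.konveyor.io API used by MTV:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk
  secret:
    name: vsphere-credentials    # Secret holding user, password, and CA cert
    namespace: openshift-mtv
  settings:
    vddkInitImage: <your-registry>/vddk:8.0.3
```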

The certificate dialog shows the vSphere certificate details. You must verify and trust this certificate for secure communication.


Click the Create provider option. After successful creation, your provider will appear in the Providers list:

Step 5: Create OpenShift Virtualization Provider as a Target
Similarly, you can add OpenShift Virtualization as a provider.

Step 6: VM Migration
Make sure that the required network is created in OpenShift before creating a migration plan.
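As an example, a bridge-based secondary network for VLAN 30 could be defined with a NetworkAttachmentDefinition like the sketch below. The attachment name, namespace, and bridge name (br1) are assumptions; use the values from your own secondary network setup:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan30
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan30",
      "type": "bridge",
      "bridge": "br1",
      "vlan": 30,
      "ipam": {}
    }
```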

Following is the VM I am planning to migrate from vSphere to OpenShift.

In the OpenShift UI, select the Migration plans option and click Create Plan.

Enter a plan name and select a project. Select the vSphere provider as the source and the OpenShift provider as the target, then click Next.

Select the VM(s) you want to migrate.

I am migrating the linuxtest2 VM, so I selected it. Depending on your requirements, you can select a single VM or multiple VMs.

You can pre-configure a network map and reuse it, or create a new mapping; I am creating a new one.
Source network: the linuxtest2 VM connects to VLAN 30. Target network: I created the same network in OpenShift, so I select VLAN 30 as the target.
Click Next.

Similar to the network mapping, you can configure a storage mapping. Click Next.

Choose the migration type. I am doing a cold migration. For a cold migration, the VM is in a shutdown state; for a warm migration, the VM remains powered on, and Changed Block Tracking (CBT) must be enabled on its disks.
| Aspect | Cold Migration | Warm Migration |
|---|---|---|
| VM Power State | Powered off for the entire migration | Powered on during data copy; shut down only at cutover |
| Approx. User Downtime | Entire migration duration | Brief, limited to the final cutover |
| Total Migration Time | Shorter (single full copy) | Longer (full copy plus incremental CBT syncs) |
| Prerequisites | VM must be shut down | CBT enabled on the VM and its disks |
| Advantages | Simpler and more predictable; no CBT needed | Minimal downtime for production workloads |
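If you plan a warm migration and CBT is not yet enabled, one way to turn it on is via the govc CLI, sketched below. The VM name (linuxtest2) and disk identifier (scsi0:0) are assumptions from this example, and the change takes effect only after a power cycle or a snapshot create/delete:

```shell
# Enable Changed Block Tracking on the VM and its first disk via extraConfig
govc vm.change -vm linuxtest2 -e ctkEnabled=true -e scsi0:0.ctkEnabled=true

# Verify the extraConfig entries were applied
govc vm.info -e linuxtest2 | grep -i ctk
```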
Click Next.

Optional settings: I am going with the defaults and not setting any optional parameters.

Review all the options and click on Create Plan.

On the Migrations tab, click the Start option to start the VM migration.

Click on Start.

Migration started.
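Progress can also be followed from the CLI. Assuming the plan lives in the default openshift-mtv namespace, these standard resource queries show the plan, its migration, and the underlying disk transfers:

```shell
# Watch the overall plan and migration status
oc get plan -n openshift-mtv
oc get migration -n openshift-mtv

# Disk copies are performed through DataVolumes in the target namespace
oc get datavolume -n <target-namespace>
```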

Migration completed successfully.

Go to the Virtualization section, locate the migrated VM, and start it.


The VM's static IP is preserved, and it can ping the gateway.
