Setting up a 100GbE PVRDMA Network on vCenter 7

After writing my last article on Getting NVIDIA NGC containers to work with VMware PVRDMA networks I had a couple of people ask me “How do I set up PVRDMA networking on vCenter?” These are the steps that I took to set up PVRDMA networking in my lab.

RDMA over Converged Ethernet (RoCE) is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. It works by encapsulating an Infiniband (IB) transport packet and sending it over Ethernet. If you’re working with network applications that require high bandwidth and low latency, RDMA will give you lower latency, higher bandwidth, and a lower CPU load than an API such as Berkeley sockets.

Full disclosure: I used to work for a startup called Bitfusion, and that startup was bought by VMware, so I now work for VMware. At Bitfusion we developed a technology for accessing hardware accelerators, such as NVIDIA GPUs, remotely across networks using TCP/IP, Infiniband, and PVRDMA. I still work on the Bitfusion product at VMware, and spend a lot of my time getting AI and ML workloads to work across networks on virtualized GPUs.

In my lab I’m using Mellanox ConnectX-5 and ConnectX-6 cards on hosts that are running ESXi 7.0.2 and vCenter 7.0.2. The cards are connected to a Mellanox Onyx MSN2700 100GbE switch.

Since I’m working with Ubuntu 18.04 and 20.04 virtual machines (VMs) in a vCenter environment, I have a couple of options for high-speed networking:

  • I can use PCI passthrough to pass the PCI network card directly through to the VM and use the network card’s native drivers on the VM to set up a networking stack. However, this means that my network card is only available to a single VM on the host and can’t be shared between VMs. It also breaks vMotion (the ability to live-migrate the VM to another host) since the VM is tied to a specific piece of hardware on a specific host. I set this up in my lab but stopped using it because of the lack of flexibility, and because we couldn’t identify any performance difference compared to SR-IOV networking.
  • I can use SR-IOV and virtual functions (VFs) to make a single card appear as if it’s multiple network cards with multiple PCI addresses, pass those through to the VMs, and use the network card’s native drivers on the VMs to set up a networking stack. I’ve set this up in my lab as well: a single card can be shared between multiple VMs, and the performance is similar to PCI passthrough. The disadvantage is that setting up SR-IOV and configuring the VFs is specific to a card’s model and manufacturer, so what works in my lab might not work in someone else’s environment.
  • I can set up PVRDMA networking and use the PVRDMA driver that comes with Ubuntu. This is what I’m going to show how to do in this article.

Set up your physical switch

First, make sure that your switch is set up correctly. On my Mellanox Onyx MSN2700 100GbE switch that means:

  • Enable the ports you’re connecting to.
  • Set the speed of each port to 100G.
  • Enable auto-negotiation on each link.
  • Set the MTU to 9000.
  • Set the flow control mode to Global.
  • No LAG/MLAG.
  • LAG mode: On.
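
On my Onyx switch those settings are applied per port from the switch CLI. This is a sketch from my notes — the exact syntax can vary between Onyx releases, and the port number (1/1) is just an example:

```
switch > enable
switch # configure terminal
switch (config) # interface ethernet 1/1
switch (config interface ethernet 1/1) # speed 100G
switch (config interface ethernet 1/1) # mtu 9000 force
switch (config interface ethernet 1/1) # no shutdown
switch (config interface ethernet 1/1) # exit
```

Repeat for each port you’re using, then save the running configuration.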

Set up your virtual switch

vCenter supports Paravirtual RDMA (PVRDMA) networking using Distributed Virtual Switches (DVS). This means you’re setting up a virtual switch in vCenter and you’ll connect your VMs to this virtual switch.

In vCenter navigate to Hosts and Clusters, then click the DataCenter icon (looks like a sphere or globe with a line under it). Find the cluster you want to add the virtual switch to, right click on the cluster and select Distributed Switch > New Distributed Switch.

  • Name: “rdma-dvs”
  • Version: 7.0.2 – ESXi 7.0.2 and later
  • Number of uplinks: 4
  • Network I/O control: Disabled
  • Default port group: Create
  • Port Group Name: “VM 100GbE Network”

Figure out which NIC is the right NIC

  • Go to Hosts and Clusters
  • Select the host
  • Click the Configure tab, then Networking > Physical adapters
  • Note which NIC is the 100GbE NIC for each host

Add Hosts to the Distributed Virtual Switch

  • Go to Hosts and Clusters
  • Click the DataCenter icon
  • Select the Networks top tab and the Distributed Switches sub-tab
  • Right click “rdma-dvs”
  • Click “Add and Manage Hosts”
  • Select “Add Hosts”
  • Select the hosts. Use “auto” for uplinks.
  • Select the physical adapters based on the list you created in the previous step, or find the Mellanox card in the list and add it. If more than one is listed, look for the card that’s “connected”.
  • Manage VMkernel adapters (accept defaults)
  • Migrate virtual machine networking (none)

Tag a vmknic for PVRDMA

  • Select an ESXi host and go to the Configure tab
  • Go to System > Advanced System Settings
  • Click Edit
  • Filter on “PVRDMA”
  • Set Net.PVRDMAVmknic = "vmk0"

Repeat for each ESXi host.
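
If you’d rather script this than click through the UI on every host, the same advanced option can (I believe) be set from each host’s ESXi shell with esxcli. Treat this as a sketch and verify the option path on your ESXi version:

```
# Tag vmk0 as the vmknic that PVRDMA should use
esxcli system settings advanced set -o /Net/PVRDMAVmknic -s vmk0

# Confirm the new value
esxcli system settings advanced list -o /Net/PVRDMAVmknic
```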

Set up the firewall for PVRDMA

  • Select an ESXi host and go to the Configure tab
  • Go to System > Firewall
  • Click Edit
  • Scroll down to find pvrdma and check the box to allow PVRDMA traffic through the firewall.

Repeat for each ESXi host.
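
This step can also be scripted from each host’s ESXi shell; treat the ruleset name as an assumption to verify on your version:

```
# Allow PVRDMA traffic through the ESXi firewall
esxcli network firewall ruleset set -e true -r pvrdma

# Verify that the ruleset is now enabled
esxcli network firewall ruleset list | grep pvrdma
```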

Set up Jumbo Frames for PVRDMA

To enable jumbo frames in a vCenter cluster that uses virtual switches, you have to set MTU 9000 on the Distributed Virtual Switch.

  • Click the Data Center icon.
  • Click the Distributed Virtual Switch that you want to set up, “rdma-dvs” in this example.
  • Go to the Configure tab.
  • Select Settings > Properties.
  • Look at Properties > Advanced > MTU. This should be set to 9000. If it’s not, click Edit.
  • Click Advanced.
  • Set MTU to 9000.
  • Click OK.

Add a PVRDMA NIC to a VM

  • Edit the VM settings
  • Add a new device
  • Select “Network Adapter”
  • Pick “VM 100GbE Network” for the network.
  • Connect at Power On (checked)
  • Adapter type PVRDMA (very important!)
  • Device Protocol: RoCE v2

Configure the VM

For Ubuntu:

sudo apt-get install rdma-core infiniband-diags ibverbs-utils

Tweak the module load order

In order for RDMA to work, the vmw_pvrdma module has to be loaded after several other modules. Maybe someone else knows a better way to do this, but the method that I got to work was adding a script in /usr/local/sbin/ that ensures the Infiniband modules are loaded on boot, then calling that script from /etc/rc.local so it gets executed at boot time.

#!/bin/sh
# Modules that need to be loaded before vmw_pvrdma for PVRDMA to work
/sbin/modprobe mlx4_ib
/sbin/modprobe ib_umad
/sbin/modprobe rdma_cm
/sbin/modprobe rdma_ucm

# Once those are loaded, reload the vmw_pvrdma module
/sbin/modprobe -r vmw_pvrdma
/sbin/modprobe vmw_pvrdma
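
Then /etc/rc.local just needs to call that script at boot. The filename load-rdma-modules.sh is a hypothetical example — use whatever you named the script — and both files need to be executable:

```shell
#!/bin/sh -e
# /etc/rc.local -- runs at the end of the boot process
/usr/local/sbin/load-rdma-modules.sh
exit 0
```

sudo chmod +x /etc/rc.local /usr/local/sbin/load-rdma-modules.sh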

Once that’s done just set up the PVRDMA network interface the same as any other network interface.
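
On Ubuntu 18.04 and 20.04 that means a netplan config. Here’s a minimal sketch — the interface name (ens224) and the addressing are assumptions for illustration, and yours will differ:

```yaml
# /etc/netplan/60-pvrdma.yaml
network:
  version: 2
  ethernets:
    ens224:
      mtu: 9000
      addresses:
        - 192.168.128.11/24
```

Apply it with “sudo netplan apply”, then check that the interface came up with MTU 9000 using “ip link show ens224”.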

Testing the network

To verify that I’m getting something close to 100Gbps on the network I use the perftest package (sudo apt-get install perftest).
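
Before running any bandwidth tests I first confirm that the guest actually sees an RDMA device. The exact output varies, but on my VMs it shows up as a vmw_pvrdma device:

```
# List RDMA devices visible in the guest
ibv_devices

# Show device details; for RoCE the link_layer should read "Ethernet"
ibv_devinfo
```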

To test bandwidth I pick two VMs on different hosts. On one VM I run:

$ ib_send_bw --report_gbits

On the other VM I run the same command and add the IP address of the PVRDMA interface on the first machine:

$ ib_send_bw --report_gbits <ip-of-first-vm>

That sends a bunch of data across the network and reports the results: I’m getting an average of 96.31Gbps over the network connection.

I can also check the latency using ib_send_lat.
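
ib_send_lat uses the same server/client pattern as ib_send_bw; the address below is a placeholder for the first VM’s PVRDMA IP:

```
# On the first VM (server side)
ib_send_lat

# On the second VM (client side)
ib_send_lat <ip-of-first-vm>
```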

Hope you find this useful.


Upgrading vCenter 7 via the command line

I have vCenter 7.0.0 installed and I want to update to 7.0.1. When I run Update Planner > Interoperability it reports that all of my ESXi hosts are running ESXi 7.0.1. If I run the pre-update checks I get “No issues found”. But when I go to the appliance to do the upgrade, both “Stage Only” and “Stage and Install” are greyed-out and unselectable.

vCenter 7 Appliance Available Updates screen

I tried a dozen different tricks, including ssh-ing into the appliance as root and editing the /etc/applmgmt/appliance/software_update_state.conf file, but nothing could enable the “Stage Only” and “Stage and Install” buttons.

Use the command line

I finally decided to try upgrading via the command line. I have backups going back 30 days. I even double-checked and yes, my NFS server has files in the backup directory for each of the past 30 days and they have data in them. There’s probably even a way to restore one of those backups if something goes horribly wrong. Onwards!

I was already logged into the vCenter appliance shell as root. The next thing I needed to do was to figure out where the command line tools were hidden. I found them in /usr/lib/applmgmt/support/scripts.

Disclaimer: I work at VMware, but I have no idea if the following is an “acceptable practice” or not. If your production vCenter is broken and you have a support contract, call support. If you’re messing around on a home or test system and you don’t care how badly you screw it up, feel free to try the command line tools.

root@vcenter [ ~ ]# cd /usr/lib/applmgmt/support/scripts
root@vcenter [ /usr/lib/applmgmt/support/scripts ]# ls -al
total 108
drwxr-xr-x 4 root root  4096 Aug 30 18:18 .
drwxr-xr-x 4 root root  4096 Aug 30 18:18 ..
-r-xr-xr-x 1 root root   205 Aug 15 07:16
-r-xr-xr-x 1 root root   633 Aug 15 07:16 manifest-verification
-r-xr-xr-x 1 root root   286 Aug 15 07:16
-r-xr-xr-x 1 root root  2056 Aug 15 07:16
-r-xr-xr-x 1 root root  3396 Aug 15 07:16
drwxr-xr-x 2 root root  4096 Aug 30 18:18 postinstallscripts
-r-xr-xr-x 1 root root  5207 Aug 15 07:16
-r-xr-xr-x 1 root root  4171 Aug 15 07:16
-r-xr-xr-x 1 root root   251 Aug 15 07:16
-r-xr-xr-x 1 root root  4001 Aug 15 07:16
-r-xr-xr-x 1 root root  3910 Aug 15 07:16
-r-xr-xr-x 1 root root 35773 Aug 15 07:16
-r-xr-xr-x 1 root root  8085 Aug 15 07:16
drwxr-xr-x 2 root root  4096 Aug 30 18:18 tests

These are the Python scripts that are linked to the Command shell. I’m actually in the root shell. I can run these directly from the root shell, or exit back to the Command shell and use them in the “official” way. In case I need to pull in support let’s do this the official way.

The software-packages script is what does the upgrade. Let’s exit back to the Command shell and see what it supports.

root@vcenter [ /usr/lib/applmgmt/support/scripts ]# exit
Command> software-packages
usage: software-packages [-h] {stage,unstage,validate,install,list} ...

optional arguments:
  -h, --help            show this help message and exit

    stage               Stage software update packages
    unstage             Purge staged software update packages
    validate            Validate software update packages
    install             Install software update packages
    list                List details of software update packages

Stage the packages for the update

Since the appliance wasn’t letting me upgrade, I thought I’d first check to see if I already have upgrades staged.

Command> software-packages list --staged
 [2021-01-22T21:45:41.022] : Packages not staged

OK. Nothing staged. How do I stage packages?

Command> software-packages stage --help
usage: software-packages stage [-h] [--url [URL]] [--iso] [--acceptEulas] [--thirdParty]

optional arguments:
  -h, --help     show this help message and exit
  --url [URL]    Download software update package from URL. If no url is specified,
                 catalog/valm/vmw/8dc0de9a-feedl-1337-be0a-6ddeadbeefa3/ is used.
  --iso          Load software update packages from CD/DVD drive attached to the appliance
  --acceptEulas  accept all Eulas
  --thirdParty   Stage third party packages.--thirdParty should only be usedwith --url.

Sounds clear enough. I’ll try that:

Command> software-packages stage --url --acceptEulas
 [2021-01-22T21:46:28.022] : Latest updates already installed on VCSA, Nothing to stage

Well that’s not correct. There’s definitely an update available. Re-reading the help again, I notice the default URL that stage uses when no --url is given. I’ve obfuscated the actual URL in the help output above, but it’s a vCenter 6.7.0 URL; I’m using 7.0.0, and I want 7.0.1.

I go back to the appliance web UI and click the Update > Settings button.

vCenter 7 Appliance Update screen

Settings shows a different URL for 7.0.1, so I copy and paste that into the command line:

Command> software-packages stage --acceptEulas --url <url-copied-from-Settings>
 [2021-01-22T21:48:28.022] : Target VCSA version =
 [2021-01-22 21:48:28,781] : Running requirements script.....

Trust but verify

A little while later everything was staged. I decided to validate everything.

Command> software-packages validate
 [2021-01-22T21:50:11.022] : For the first instance of the identity domain, this is the password given to the Administrator account.  Otherwise, this is the password of the Administrator account of the replication partner.
Enter Single Sign-On administrator password:

 [2021-01-22T21:50:22.022] : Validating software update payload
 [2021-01-22 21:50:22,327] : Running validate script.....
 [2021-01-22T21:50:26.022] : Validation successful
 [2021-01-22T21:50:26.022] : Validation process completed successfully

Then I check to see what’s staged:

Command> software-packages list --staged
 [2021-01-22T21:50:45.022] :
        category: Bugfix
        leaf_services: ['vmware-pod', 'vsphere-ui', 'wcp']
        vendor: VMware, Inc.
        name: VC-7.0U1c
        size in MB: 5107
        tags: []
        version_supported: []
        productname: VMware vCenter Server
        releasedate: December 17, 2020
        updateversion: True
        allowedSourceVersions: [,]
        buildnumber: 17327517
        rebootrequired: False
        summary: {'id': 'patch.summary', 'translatable': 'In-place upgrade for vCenter appliances.', 'localized': 'In-place upgrade for vCenter appliances.'}
        type: Update
        severity: Critical
        TPP_ISO: False
        thirdPartyAvailable: False
        nonThirdPartyAvailable: True
        thirdPartyInstallation: False
        timeToInstall: 0
        requiredDiskSpace: {'/storage/core': 30.353511543273928, '/storage/seat': 32.21015625}
        eulaAcceptTime: 2021-01-22 21:48:37 UTC

Well, that shows VC-7.0U1c, which is the version I’ve been trying to upgrade to, so that looks good.

Did I mention that I have backup copies of vCenter going back 30 days? Well I do. If this goes really sideways I’m going to have to restore one of them.

Let’s do the update!

Command> software-packages install --staged
 [2021-01-22T21:51:23.022] : For the first instance of the identity domain, this is the password given to the Administrator account.  Otherwise, this is the password of the Administrator account of the replication partner.
Enter Single Sign-On administrator password:

 [2021-01-22T21:51:43.022] : Validating software update payload
 [2021-01-22 21:51:43,716] : Running validate script.....
 [2021-01-22T21:51:47.022] : Validation successful
 [2021-01-22 21:51:47,730] : Copying software packages 251/251
 [2021-01-22 21:55:37,642] : Running system-prepare script.....
 [2021-01-22 21:55:42,661] : Running test transaction ....
 [2021-01-22 21:55:44,678] : Running prepatch script...
 [2021-01-22 21:58:27,896] : Upgrading software packages ....
 [2021-01-22T22:02:10.022] : Setting appliance version to build 17327517
 [2021-01-22 22:02:10,242] : Running patch script.....
 [2021-01-22 22:11:34,245] : Starting all services ....
 [2021-01-22T22:11:35.022] : Services started.
 [2021-01-22T22:11:35.022] : Installation process completed successfully

That was it. The actual update took about 20 minutes, and although the UI said no reboot was necessary, vCenter did reboot during the update. When it was done vCenter was running the new version, build 17327517.

The vCenter appliance Update “Stage Only” and “Stage and Install” buttons are still greyed-out and unselectable, but right now there are no updates available so that’s how they should be. I’ll have to wait for the next update to see if they’re working again. If the buttons are still broken, at least now I know how to use the command line to install an update.

Hope you find this useful.

Update as of 2021-06-30: I have successfully upgraded a couple of times since I wrote this article using the GUI and the “Stage Only” and “Stage and Install” buttons are no longer greyed out when an update is available.

I did run into an issue with a later update where I got the error “Package discrepency error, Cannot resume!” [sic] when I tried to stage the update, and I hit it again on the update after that. Both times I resolved the error and got the upgrades to install by following the steps in William Lam’s article Stage Only & Stage and Install buttons disabled when updating to vSphere 7.0 Update 2a. According to William these steps will need to be repeated until 7.0 U3 is released.


Updating the vCenter appliance root password

If you’re like me, you rarely ssh into your vCenter appliance as “root”. However, the time comes when you need to update vCenter, you run the “Pre-Update Checks” — and because you never log into the appliance — you get the message that your root password needs to be updated before you can install the update.

So… log into the vCenter Service Management Console (https://your-vcenter:5480), click Access and then Edit. Make sure that SSH Login, DCLI, Console CLI, and BASH access are all enabled. Set the BASH timeout to 15 minutes so it gets disabled automatically when you’re done.

Once you’ve done that, ssh to the appliance.

$ ssh root@your-vcenter

VMware vCenter Server

Type: vCenter Server with an embedded Platform Services Controller

Received disconnect from your-vcenter port 22:2: Too many authentication failures
Disconnected from your-vcenter port 22

Did you get a “Received disconnect … Too many authentication failures” message? Don’t worry, no one is hacking into your vCenter, it’s just that you have more than one ssh key on your keyring and for some reason someone at VMware thought that it would be a great idea to set the vCenter ssh setting MaxAuthTries = 2. Your first ssh key counts as one try, your second ssh key counts as attempt number 2, and… you’re done. vCenter won’t let you log in.

To bypass public key authentication checks entirely use the -o PubkeyAuthentication=no parameter for ssh:

$ ssh -o PubkeyAuthentication=no root@your-vcenter

VMware vCenter Server

Type: vCenter Server with an embedded Platform Services Controller

root@your-vcenter's password:
Connected to service

    * List APIs: "help api list"
    * List Plugins: "help pi list"
    * Launch BASH: "shell"


Now get to the bash shell by typing shell, then run passwd to update the root password:

Command> shell
Shell access is granted to root
root@vcenter [ ~ ]# passwd
New password:
Retype new password:
passwd: password updated successfully
root@vcenter [ ~ ]# exit
Command> exit
Connection to your-vcenter closed.

Before you log out, run the Pre-Update Check again to verify that vCenter sees that the password has been updated. This time you should get the message “No issues found. Pre-update checks have passed.”

Hope you find this useful.
