

This blog is to demonstrate the network performance (network throughput only, here) of an SR-IOV enabled CentOS 7 virtual machine running on vSphere 6.5. Regarding vSphere 6.5 support for SR-IOV, please refer to the link below:

ESXi hosts: we use 2 ESXi hosts (host10 and host11) for our testing. Each host has an Intel X540-AT2 adapter, the physical 10GbE NIC that backs the SR-IOV virtual functions.

VM settings: Reserve All Guest Memory is enabled (this is a mandatory requirement for SR-IOV, but I enable it for all testing VMs so they are configured identically). In the vSphere Web Client this is the "Reserve all guest memory (All locked)" checkbox in the VM's memory settings.

Note: I have only 4 VMs running on the 2 vSphere ESXi hosts in my testing environment, to remove the impact of resource contention. In addition, all 4 VMs are in the same layer 2 network to remove any potential bottleneck when performing the network throughput testing with iPerf3.
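The iPerf3 runs themselves are simple: one VM acts as the server and another as the client. The exact flags used for the measurements are not shown in this post, so the following is only a minimal sketch; the server IP, the 60-second duration, and the 4 parallel streams are placeholder values, not the measured configuration.

On the server VM:
~]# iperf3 -s

On the client VM (-t sets the test duration in seconds, -P the number of parallel streams):
~]# iperf3 -c 192.168.100.11 -t 60 -P 4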

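Before looking inside the guests, note that the host side has to expose virtual functions before one can be assigned to a VM. That step is not shown in this post; what follows is only a hedged sketch for an X540 (ixgbe driver) host, and the VF count of 8 per port is an assumption. In vSphere 6.5 the same setting is also available per physical adapter in the Web Client.

~]# esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"
(reboot the host for the parameter to take effect, then list the SR-IOV capable NICs)
~]# esxcli network sriovnic list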
On the VM with a standard VMXNET3 adapter, lspci inside the guest shows the paravirtual NIC:

~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)
...
00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

On the SR-IOV enabled VM, the guest instead sees a virtual function of the host's physical adapter, i.e. the same ethernet controller (X540-AT2) as the vSphere ESXi host:

~]# lspci
03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

Running ethtool -i ens160 in each guest confirms which driver is bound to the interface.
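For reference, ethtool -i on CentOS 7 typically reports the following for these two adapters (the version and bus-info strings are placeholders):

On the VMXNET3 VM:
~]# ethtool -i ens160
driver: vmxnet3
version: ...
bus-info: 0000:03:00.0

On the SR-IOV VM:
~]# ethtool -i ens160
driver: ixgbevf
version: ...
bus-info: 0000:03:00.0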

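The VF assignment can also be cross-checked from the ESXi host itself. A hedged sketch, assuming the X540 port is vmnic4 (the NIC name is a placeholder):

~]# esxcli network sriovnic vf list -n vmnic4

This lists each virtual function on that port together with its state, and one VF should show as in use by the SR-IOV test VM.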