10G NIC performance: VFIO vs virtio
We did some experiments to measure network performance overhead in a virtualization environment, comparing the VFIO passthrough and virtio approaches.
Test Topology
2 Intel Grantley-EP platforms (Xeon E5-2697 v3) connected by a 10G link; 96 GB of memory.
NIC: Intel 82599ES (http://ark.intel.com/products/41282/Intel-82599ES-10-Gigabit-Ethernet-Controller)
Test Tool: iperf
OS: RHEL 7.1
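The exact iperf invocation is not recorded on this page; below is a minimal sketch of how such a throughput measurement can be reproduced. The address 192.168.1.1 and the 60-second duration are illustrative only.

# on the receiving host
iperf -s
# on the sending host, pointing at the receiver's address on the 10G link
iperf -c 192.168.1.1 -t 60 -i 10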
Result summary
- In the native environment, iperf can get 9.4 Gbps throughput. Since iperf is a software packet generator running as a normal user-space process, this is a reasonable number.
- With VFIO passthrough, network performance is also 9.4 Gbps; i.e., no virtualization overhead can be observed with the VFIO passthrough method, in the context of a typical software network user application.
- With the virtio approach, if properly configured (see details below), network performance can also reach 9.4 Gbps; otherwise, performance drops to 3.6 Gbps.
Some references first
SR-IOV: http://www.intel.com/content/www/us/en/network-adapters/virtualization.html
VT-d assignment: How_to_assign_devices_with_VT-d_in_KVM
Here are the details for each configuration.
VFIO passthrough VF (SR-IOV) to guest
Requirements
- Your NIC supports SR-IOV (see below for how to check)
- NIC driver (usually igb or ixgbe) loaded with 'max_vfs=<num>' (use modinfo to check the exact parameter name; see the example after this list)
- kernel modules needed: NIC driver, vfio-pci module, intel-iommu module
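For example, a rough sketch of loading the PF driver with VFs enabled; the ixgbe driver name and the VF count of 2 are only illustrative:

modinfo ixgbe | grep -i vfs          # confirm the exact parameter name first
modprobe -r ixgbe                    # unload the driver if it is already loaded
modprobe ixgbe max_vfs=2             # reload it with 2 VFs per port
lspci | grep -i "virtual function"   # the new VFs should now be visible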
Check if your NIC supports SR-IOV
lspci -s <NIC_BDF> -vvv | grep -i "Single Root I/O Virtualization"
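On an SR-IOV capable NIC the command prints the SR-IOV capability line, roughly like the following (the capability offset [160 v1] is device-specific):

Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)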
Assign the VF to a guest
Unbind from the igbvf driver and bind to the vfio-pci driver
- Unbind from the previous driver (taking an igbvf device as an example):
- echo <vf_BDF> > /sys/bus/pci/devices/<vf_BDF>/driver/unbind
- Get its numeric vendor:device ID:
- lspci -s <vf_BDF> -n
- It will return a line like "0a:13.3 0200: 8086:1520 (rev 01)", where 8086:1520 is the vendor:device ID.
- Bind to the vfio-pci driver:
- echo 8086 1520 > /sys/bus/pci/drivers/vfio-pci/new_id
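Putting the steps together, a sketch using the example VF above (BDF 0a:13.3, vendor:device ID 8086:1520); adjust the values to your own VF:

VF_BDF=0000:0a:13.3
echo $VF_BDF > /sys/bus/pci/devices/$VF_BDF/driver/unbind   # detach it from igbvf
lspci -s $VF_BDF -n                                         # prints "0a:13.3 0200: 8086:1520 (rev 01)"
modprobe vfio-pci                                           # make sure the vfio-pci module is loaded
echo 8086 1520 > /sys/bus/pci/drivers/vfio-pci/new_id       # vfio-pci now claims the VF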
Now you can verify that the device is bound to the vfio-pci driver:
lspci -s <vf_BDF> -k
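The output should now show vfio-pci as the driver in use, roughly like below (the device name printed by lspci may differ on your system):

0a:13.3 Ethernet controller: Intel Corporation I350 Ethernet Controller Virtual Function (rev 01)
        Kernel driver in use: vfio-pci
        Kernel modules: igbvf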
Create guest with direct passthrough via VFIO framework
qemu-kvm -m 16G -smp 8 -net none -device vfio-pci,host=81:10.0 -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic
'-net none' tells qemu not to emulate any network device
'-device vfio-pci,host=' assigns a vfio-pci device, identified by its host BDF, to the guest
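Note that vfio-pci needs the IOMMU enabled on the host; a quick sketch of how to check this, assuming an Intel platform:

grep intel_iommu=on /proc/cmdline   # the kernel must boot with intel_iommu=on
dmesg | grep -i -e DMAR -e IOMMU    # look for DMAR / "IOMMU enabled" messages
ls /sys/kernel/iommu_groups/        # non-empty once IOMMU groups have been created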
Virtio
Requirements
Virtio support compiled in the kernel (the RHEL 7.1 native kernel already has it):
CONFIG_VIRTIO=m
CONFIG_VIRTIO_RING=m
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_BLK=m
CONFIG_VIRTIO_NET=m
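On RHEL 7.1 this can be verified against the running kernel's config, for example:

grep VIRTIO /boot/config-$(uname -r)   # expect the CONFIG_VIRTIO* options listed above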
Create guest with a virtio network device
qemu-kvm -m 16G -smp 8 -device virtio-net-pci,netdev=net0 -netdev tap,id=net0,script=/etc/qemu-ifup -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic
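Inside the guest you can confirm that the virtio network device is in use; the interface name eth0 below is illustrative:

lspci | grep -i virtio   # shows a "Virtio network device"
ethtool -i eth0          # reports "driver: virtio_net"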
Poor performance configuration
qemu-kvm -m 16G -smp 8 -net nic,model=virtio -net tap,script=/etc/qemu-ifup -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic
If the '-device virtio-net-pci' option is not used, performance will be 3.6 Gbps. With the legacy '-net nic'/'-net tap' syntax the NIC and the tap device are connected through QEMU's internal hub layer rather than directly, which is the likely source of the slowdown.