10G NIC performance: VFIO vs virtio
Revision as of 05:50, 11 May 2015
We ran some experiments to measure the network performance overhead of virtualization, comparing the VFIO passthrough and virtio approaches.
Test Topology
2 Intel Grantley-EP platforms (Xeon E5-2697 v3) connected by a 10G link; 96 GB memory.
NIC: Intel 82599ES
Test Tool: iperf
OS: RHEL 7.1
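As a point of reference, a minimal iperf run between the two hosts looks like this (the server address is a placeholder):

```shell
# On the receiver (server) side:
iperf -s

# On the sender (client) side; 10.0.0.1 stands in for the server's address.
# -t 30 runs the test for 30 seconds.
iperf -c 10.0.0.1 -t 30
```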
Result summary
- In the native environment, iperf reaches 9.4 Gbps throughput. Since iperf is a software packet generator running as an ordinary user process, this is a reasonable number for a 10G link.
- With VFIO passthrough, throughput is also 9.4 Gbps; i.e., for a typical software network application we observe no virtualization overhead with the VFIO passthrough method.
- With the virtio approach, if properly configured (details below), throughput also reaches 9.4 Gbps.
Here are the details of each configuration.
VFIO passthrough VF (SR-IOV) to guest
Requirements
- Your NIC supports SR-IOV (how to check: see below)
- PF driver (usually igb or ixgbe) loaded with 'max_vfs=<num>' (run modinfo to confirm the exact parameter name)
- kernel modules needed: the NIC driver, the vfio-pci module, and the intel-iommu driver (boot with 'intel_iommu=on')
Check if your NIC supports SR-IOV
lspci -s <NIC_BDF> -vvv | grep -i "Single Root I/O Virtualization"
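Once SR-IOV support is confirmed, the VFs can be created either via the driver module parameter or, on newer kernels, through sysfs. A sketch under these assumptions, with the VF count and interface name as placeholders:

```shell
# Option 1: reload the PF driver with VFs enabled
# (confirm the exact parameter name with 'modinfo ixgbe').
modprobe -r ixgbe
modprobe ixgbe max_vfs=4

# Option 2: create VFs through sysfs (kernel 3.8+), if the PF driver supports it
echo 4 > /sys/class/net/<iface>/device/sriov_numvfs

# The VFs now appear as additional PCI functions:
lspci | grep -i "virtual function"
```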
Assign the VF to a guest
Unbind from the igbvf driver and bind to the vfio-pci driver
- unbind from the previous driver (taking an igbvf device as an example)
- echo <vf_BDF> > /sys/bus/pci/devices/<vf_BDF>/driver/unbind
- lspci -s <vf_BDF> -n //to get its vendor and device IDs
- //it will return something like:
- 0a:13.3 0200: 8086:1520 (rev 01)
- //8086:1520 is its vendor:device ID pair
- bind to vfio-pci driver
- echo 8086 1520 > /sys/bus/pci/drivers/vfio-pci/new_id
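The lookup-and-bind steps above can be scripted. A minimal sketch: the `pci_ids` helper name is hypothetical, and the sample line mirrors the lspci output shown above.

```shell
# Hypothetical helper: pull the "vendor device" pair out of one line of
# 'lspci -n' output, in the form expected by vfio-pci's new_id file.
pci_ids() {
    echo "$1" | sed -n 's/.* \([0-9a-f]\{4\}\):\([0-9a-f]\{4\}\).*/\1 \2/p'
}

pci_ids "0a:13.3 0200: 8086:1520 (rev 01)"
# prints: 8086 1520
```

With the IDs in hand, the echo into new_id shown above registers the device with vfio-pci.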
Now you can see that the device is bound to the vfio-pci driver:
lspci -s <vf_BDF> -k
Create guest with direct passthrough via VFIO framework
qemu-kvm -m 16G -smp 8 -net none -device vfio-pci,host=81:10.0 -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic
'-net none' tells qemu not to emulate any network device
'-device vfio-pci,host=' adds a vfio-pci device, identified by its host BDF
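VFIO grants the guest a whole IOMMU group at a time, so before launching it is worth confirming that the IOMMU is active and seeing which devices share the VF's group. A sketch (the BDF is a placeholder):

```shell
# Confirm the IOMMU is enabled (requires 'intel_iommu=on' on the kernel
# command line on Intel platforms):
dmesg | grep -i -e DMAR -e IOMMU

# List all devices in the VF's IOMMU group; everything listed here must be
# bound to vfio-pci (or be a bridge) for the passthrough to work.
ls /sys/bus/pci/devices/0000:81:10.0/iommu_group/devices/
```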
Virtio
Requirements
virtio compiled in the kernel (the RHEL 7.1 native kernel already has them)
CONFIG_VIRTIO=m
CONFIG_VIRTIO_RING=m
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_BLK=m
CONFIG_VIRTIO_NET=m
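Inside the guest, these requirements can be verified without rebuilding anything; a quick check:

```shell
# Confirm the virtio options in the running kernel's config:
grep 'CONFIG_VIRTIO' /boot/config-$(uname -r)

# And that the modules are actually loaded:
lsmod | grep virtio
```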
Create guest with a virtio network device
qemu-kvm -m 16G -smp 8 -device virtio-net-pci,netdev=net0 -netdev tap,id=net0,script=/etc/qemu-ifup -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic
Poor performance configuration
qemu-kvm -m 16G -smp 8 -net nic,model=virtio -net tap,script=/etc/qemu-ifup -drive file=/var/lib/libvirt/images/rhel7.1.img,if=virtio -nographic
If the '-device virtio-net-pci' option is not used, throughput drops to 3.6 Gbps: the legacy '-net nic'/'-net tap' syntax connects the device through QEMU's internal hub, which adds significant overhead compared with the '-netdev' syntax.