Virtio
From KVM
Revision as of 18:27, 31 January 2008
Paravirtualized drivers for kvm/Linux
- Virtio was chosen to be the main platform for IO virtualization in KVM
- The idea behind it is to provide hypervisors with a common framework for IO virtualization
- More information (although not up to date) can be found in the KVM pv driver slides: http://kvm.qumranet.com/kvmwiki/KvmForum2007?action=AttachFile&do=get&target=kvm_pv_drv.pdf
- At the moment network/block/balloon devices are supported for kvm
- The host implementation is in userspace (qemu), so no driver is needed in the host.
How to use Virtio
- Get kvm version >= 60
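- For example, building from the release tarball (a sketch; the exact SourceForge URL and configure invocation are assumptions, check the kvm-60 release announcement):
wget http://downloads.sourceforge.net/kvm/kvm-60.tar.gz
tar xzf kvm-60.tar.gz
cd kvm-60
./configure && make && sudo make install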
- Get a Linux kernel with virtio drivers for the guest
- Either build it from Rusty's tree: http://ozlabs.org/~rusty/kernel/hg/
- Or git clone git://kvm.qumranet.com/home/dor/src/linux-2.6-nv and use the rusty branch (see the sketch after this list)
- Soon an official repository will be released
- As an alternative, one can use a standard guest kernel (> 2.6.18) and the backward-compatibility modules below
- The backport and instructions can be found in Anthony Liguori's virtio-ext-modules: http://codemonkey.ws/virtio-ext-modules
- At the moment that backport is broken because the guest drivers have since evolved; an update will come soon
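- A sketch of the git route above (repository and branch names are from this page; the kernel config symbols are assumptions based on the mainline virtio options and may differ in these trees):
git clone git://kvm.qumranet.com/home/dor/src/linux-2.6-nv
cd linux-2.6-nv
git checkout -b rusty origin/rusty
# enable the guest drivers: CONFIG_VIRTIO_PCI, CONFIG_VIRTIO_NET, CONFIG_VIRTIO_BLK, CONFIG_VIRTIO_BALLOON
make menuconfig
make && sudo make modules_install && sudo make install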
- Use model=virtio for the network devices.
- Example
qemu/x86_64-softmmu/qemu-system-x86_64 -boot c -hda /images/xpbase.qcow2 -m 384 -net nic,model=virtio -net tap,script=/etc/kvm/qemu-ifup
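- A block device can use virtio as well; this sketch assumes a kvm build whose -drive option supports if=virtio, and the disk image path is just an example:
qemu/x86_64-softmmu/qemu-system-x86_64 -boot c -drive file=/images/guest.img,if=virtio -m 384 -net nic,model=virtio -net tap,script=/etc/kvm/qemu-ifup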
- At the moment the kernel modules are automatically loaded in the guest but the interface should be started manually (dhclient/ifconfig)
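- For example, inside the guest (assuming the virtio NIC comes up as eth0):
lsmod | grep virtio     # check that the virtio modules were loaded
ifconfig eth0 up        # bring the interface up
dhclient eth0           # get an address over DHCP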
- Currently performance is much better when the host kernel is configured with CONFIG_HIGH_RES_TIMERS. Another option is to use HPET/RTC via qemu's -clock option.
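- To check the host kernel, and a sketch of the -clock alternative (the hpet clock source name is an assumption; check the -clock documentation of your qemu build):
grep CONFIG_HIGH_RES_TIMERS /boot/config-$(uname -r)
qemu/x86_64-softmmu/qemu-system-x86_64 -clock hpet -boot c -hda /images/xpbase.qcow2 -m 384 -net nic,model=virtio -net tap,script=/etc/kvm/qemu-ifup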
- Expected performance
- Performance varies from host to host, kernel to kernel
- On my laptop I measured 1.1 Gbps rx and 850 Mbps tx throughput using 2.6.23.
- Ping latency is 300-500 usec
- Enjoy, more to come :)