Tuning KVM

CPU Performance

Modern processors come with a wide variety of performance-enhancing features, such as streaming instruction sets (SSE) and other extended instructions. These features vary from processor to processor.

QEMU and KVM default to a compatible subset of CPU features, so that if you change your host processor or perform a live migration, the guest will see its CPU features unchanged. This is great for compatibility but comes at a performance cost.

To pass all available host processor features to the guest, use the command-line switch:

 qemu -cpu host

If you wish to retain compatibility, you can expose selected features to your guest. As long as all your hosts have these features, compatibility is retained:

 qemu -cpu qemu64,+ssse3,+sse4.1,+sse4.2,+x2apic
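
To see which CPU models and feature names your QEMU build accepts (the output varies by version, and very old builds use -cpu ? instead of -cpu help), you can ask QEMU to list them:

 qemu -cpu help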

To see the difference between the capabilities of the host CPU and the guest CPU, compare the output of the following command on each system:

  cat /proc/cpuinfo | grep flags | uniq
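
If both machines are reachable over ssh, the two flag lists can also be compared directly in one step; the hostnames myhost and myguest below are just placeholders:

 diff <(ssh myhost "grep ^flags /proc/cpuinfo | uniq") <(ssh myguest "grep ^flags /proc/cpuinfo | uniq")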

For example, the default setting on a 64-bit host machine is "-cpu qemu64".
This includes the following flags:

   fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm up pni hypervisor

The host itself might support other flags like cx16, mmxext, and so on.

See CPU Feature Flags And Their Meanings (and other resources on the web) for more information.


Networking

QEMU defaults to user-mode networking (slirp), which is available without prior setup and without administrative privileges on the host. Unfortunately, it is also very slow. To get high-performance networking, switch to a bridged setup via the -net tap command-line switch.

 qemu -net nic,model=virtio,mac=... -net tap,ifname=...
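
Bridged networking needs some one-time setup on the host. A minimal sketch, assuming the bridge is named br0, the physical interface is eth0, and the bridge-utils package is installed (moving the host's own IP configuration onto br0 is left out here):

 brctl addbr br0
 brctl addif br0 eth0
 ip link set br0 up

By default QEMU runs /etc/qemu-ifup with the tap interface name as its argument; a minimal version of that script just brings the interface up and adds it to the bridge:

 #!/bin/sh
 # called by QEMU with the tap interface name as $1
 ip link set "$1" up
 brctl addif br0 "$1"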

QEMU also defaults to the RTL8139 network interface card (NIC) model. Again, this card is compatible with most guests, but does not offer the best performance. If your guest supports it, switch to the virtio model:

 qemu -net nic,model=virtio,mac=... -net tap,ifname=...


Storage

QEMU supports a wide variety of storage formats and back-ends. The easiest to use are the raw and qcow2 formats, but for the best performance use a raw partition. You can create either a logical volume or a partition and assign it to the guest:

 qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio
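
If the logical volume does not exist yet, it can be created with LVM beforehand; the volume group name ImagesVolumeGroup matches the example above, while the 20G size is only an illustration:

 lvcreate -L 20G -n Guest1 ImagesVolumeGroup

The new volume then appears under /dev/mapper/ as ImagesVolumeGroup-Guest1 and can be passed to the guest as shown above.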

QEMU also supports a wide variety of caching modes. Writeback is useful for testing but does not offer storage guarantees. Writethrough (the default) is safer and relies on the host cache. If you are using raw volumes or partitions, it is best to avoid the cache completely, which reduces data copies and bus traffic:

 qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio
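
For example, a throwaway test image could run with writeback caching instead (the file path here is just a placeholder):

 qemu -drive file=/var/lib/images/scratch.qcow2,cache=writeback,if=virtio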

As with networking, QEMU supports several storage interfaces. The default, IDE, is widely supported by guests but may be slow, especially with disk arrays. If your guest supports it, use the virtio interface:

 qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio
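
To verify that the guest really uses virtio, check inside the guest: virtio block devices appear as /dev/vda, /dev/vdb and so on rather than /dev/sda or /dev/hda, and the virtio_blk driver should be in use (lsmod shows nothing if the driver is built into the kernel):

 ls /dev/vd*
 lsmod | grep virtio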

Don't use the Linux filesystem btrfs on the host for the image files: it will result in low I/O performance. The KVM guest may even freeze under heavy I/O load on the guest.