MigrationTodo
This page reports the qemu migration ToDo list. It includes:
Support for big memory machines
The current code fails with 64 GB/256 GB guests. Things that need to improve are:
- dirty migration bitmap: we need to split it from the normal TCG bitmap (the bitmap has 16 bits per page of memory, and kvm only ever uses the migration bit); see the sketch after this list
- TLB handling: related to the previous item; kvm does not need TLB handling in qemu
- detecting whether or not we converge during migration
- improving measurements to be able to give good information to management tools
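The split could look roughly like the following. This is a minimal sketch in C, not the qemu implementation; the names (migration_bitmap, migration_bitmap_init, ram_pages) are hypothetical. It only illustrates the idea of keeping a one-bit-per-page bitmap for migration separate from the wider per-page flags word used by TCG, which is what makes big-memory guests cheaper to track.

<pre>
/*
 * Minimal sketch of splitting the migration dirty bit out of the combined
 * per-page dirty flags.  All names here are hypothetical, not from the
 * qemu tree.
 */
#include <stdint.h>
#include <stdlib.h>

#define BITS_PER_LONG (sizeof(unsigned long) * 8)

static unsigned long *migration_bitmap;   /* one bit per guest page */
static size_t ram_pages;                  /* total number of guest pages */

/* Allocate a standalone bitmap: 1 bit per page instead of a 16-bit flags
 * word per page, so a 256 GB guest needs ~8 MB instead of ~128 MB. */
static void migration_bitmap_init(size_t pages)
{
    ram_pages = pages;
    migration_bitmap = calloc((pages + BITS_PER_LONG - 1) / BITS_PER_LONG,
                              sizeof(unsigned long));
}

/* Mark a page dirty for migration only; TCG's code/VGA bits live elsewhere. */
static void migration_bitmap_set(size_t page)
{
    migration_bitmap[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
}

/* Test and clear, as the migration loop would when it sends a page. */
static int migration_bitmap_test_and_clear(size_t page)
{
    unsigned long mask = 1UL << (page % BITS_PER_LONG);
    int dirty = !!(migration_bitmap[page / BITS_PER_LONG] & mask);
    migration_bitmap[page / BITS_PER_LONG] &= ~mask;
    return dirty;
}
</pre>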
Port all missing cpus to VMState
This means that nothing outside of hw/ will use the old qemu migration code.
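As an illustration of what such a port looks like, here is a hedged sketch of a VMStateDescription. "FooState" and its fields are invented for the example; the VMSTATE_* macros and the struct layout follow the VMState API, but check the tree for the exact registration call to use.

<pre>
/*
 * Sketch: describing state declaratively with VMState instead of
 * hand-written save/load handlers.  FooState is a made-up example.
 */
#include "hw/hw.h"

typedef struct FooState {
    uint32_t ctrl;
    uint32_t status;
    uint8_t fifo[64];
    int64_t timer_expiry;
} FooState;

static const VMStateDescription vmstate_foo = {
    .name = "foo",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_UINT32(ctrl, FooState),
        VMSTATE_UINT32(status, FooState),
        VMSTATE_UINT8_ARRAY(fifo, FooState, 64),
        VMSTATE_INT64(timer_expiry, FooState),
        VMSTATE_END_OF_LIST()
    }
};
</pre>

Once a cpu or device describes its state this way, the generic migration code can save and load it without the device open-coding qemu_put_*/qemu_get_* calls.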
Port all missing devices to VMState
We need to finish this one to be able to change the migration implementation/protocol. Current problems include making too many data copies to be able to saturate a 10G network.
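One common way to cut copies on the send path is to hand the page directly to a scatter/gather write instead of memcpying it into a staging buffer first. The sketch below is generic POSIX code, not the qemu migration protocol; the header layout and send_page() helper are hypothetical.

<pre>
/*
 * Generic POSIX sketch (not qemu code) of the "avoid extra copies" idea:
 * send a small header and the guest page with one writev() instead of
 * memcpying both into an intermediate buffer.  Error handling and short
 * writes are omitted for brevity.
 */
#include <stdint.h>
#include <sys/uio.h>
#include <unistd.h>

#define PAGE_SIZE 4096

/* Hypothetical per-page wire header: address plus flags. */
struct page_header {
    uint64_t addr;
    uint32_t flags;
} __attribute__((packed));

/* The kernel gathers header and page data from their original locations,
 * so the page is never copied in userspace before hitting the socket. */
static ssize_t send_page(int fd, uint64_t addr, const void *page)
{
    struct page_header hdr = { .addr = addr, .flags = 0 };
    struct iovec iov[2] = {
        { .iov_base = &hdr,         .iov_len = sizeof(hdr) },
        { .iov_base = (void *)page, .iov_len = PAGE_SIZE   },
    };
    return writev(fd, iov, 2);
}
</pre>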