KVM autotest refactor

Suggested approach

1) Move code that has negligible impact [see table below] from 'client/tests/kvm' to 'client' ASAP

+ This would immediately benefit xen autotest, since it frequently lags behind in features and fixes. One example is the newer syntax in kvm_config.py, which was not available in the xen autotest counterpart.

2) Synchronize usage of that code (from step 1) between kvm and xen autotest

+ Rewrite current tests, changing imports of modules from 'kvm_*' and 'xen_*' to a common prefix ('virt_*' ?).

+ This can be done by an automated script (see the sketch after this list) and verified by the unit tests.

3) Work gradually on the code that would generate more impact, implementing abstractions and other mechanisms

+ The config file, and thus the params passed to tests, already adds a lot of flexibility [see example snippet #1].

+ Improve monitor functionality, introducing methods that wrap the hypervisor instead of sending specific commands [see example snippets #2 and #3].

+ This would eventually become a rather large API. The libvirt API should be evaluated and, if considered sound and appropriate for autotest, this API could be made similar to it.

+ Modify tests so that they make use of the improved Monitor/VM API.

4) Publish the resulting API and best practices documentation so that new test writers can follow them when submitting tests.
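
As a rough illustration of the automated rewrite mentioned in step 2, a script along the following lines could do the bulk of the work; the 'virt_' prefix and the module mapping are assumptions, not settled names.

# Hypothetical sketch of the automated import rewrite from step 2.
# The 'virt_' prefix and the module mapping are assumptions.
import os
import re

RENAMES = {
    "kvm_utils": "virt_utils",
    "kvm_subprocess": "virt_subprocess",
    "kvm_vm": "virt_vm",
    "xen_utils": "virt_utils",
}

# Match the old module names only as whole words, e.g. in
# 'import kvm_utils' or 'kvm_utils.run(...)'.
PATTERN = re.compile(r"\b(%s)\b" % "|".join(map(re.escape, RENAMES)))


def rewrite_tree(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            source = open(path).read()
            new_source = PATTERN.sub(lambda m: RENAMES[m.group(1)], source)
            if new_source != source:
                open(path, "w").write(new_source)


if __name__ == "__main__":
    rewrite_tree("client/tests/kvm")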


Other notes

  • Regarding the prefix (virt_* ?), some functionality is not even tied to virtualization (such as the configuration format/parser) and could benefit other complex test modules;
  • kvm_vm:

    - There are significant differences between the (KVM) VM class and the (xen) XendDomain class. Methods such as create() are quite different in both parameters and implementation
    - (KVM) VM uses a 'monitor' to interact with qemu, whereas (xen) XendDomain executes 'xm' commands
    - the 'monitor' abstraction is valid, but should have clear boundaries
    - xen autotest should also get a 'monitor' implementation
    - this even paves the way to a libvirt based monitor implementation
    - probably a capabilities mechanism would be necessary to accommodate differences between virt technologies (see kvm_monitor below)

  • kvm_monitor:

    - There are methods and parameters which do not map completely to Xen, especially for PV guests (screendump, migrate with full disk copy, mouse move, etc.)
    - Again, a capabilities mechanism could be used (see the sketch below)
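
A capabilities mechanism could be as simple as a set of flags declared by each monitor implementation and queried by the tests. The sketch below only illustrates the idea; the class names, capability strings and has_cap() method are assumptions, not existing autotest API.

# Hypothetical sketch of a capabilities mechanism; all names are illustrative.
class Monitor(object):
    # Capabilities a concrete monitor implementation declares it supports.
    CAPS = frozenset()

    def has_cap(self, cap):
        return cap in self.CAPS


class KVMMonitor(Monitor):
    CAPS = frozenset(["screendump", "pci_hotplug", "migrate_full_disk_copy"])


class XenMonitor(Monitor):
    # A PV-only monitor would not advertise screendump, migration with
    # full disk copy, mouse move, etc.
    CAPS = frozenset(["pci_hotplug"])

A test could then check vm.monitor.has_cap("screendump") and skip (or flag the test as not applicable) instead of failing on an operation the virt technology does not provide.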


Example #1 (pci_hotplug.py)

Parameters provide a lot of flexibility and diminish the need to implement an interface on a variant that does not need it or cannot provide it. This code snippet:

# Modprobe the module if specified in config file
module = params.get("modprobe_module")
if module:
    session.cmd("modprobe %s" % module)

could be kept exactly like that, loading 'acpiphp' as it does now for RHEL guests on kvm, and loading nothing for RHEL xen domU kernels (with XEN_PCIDEV_FRONTEND=y).

If someone wants to test a xen domU kernel with modular XEN_PCIDEV_FRONTEND, only this parameter would have to be adjusted.
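
For illustration, a hypothetical config excerpt in the kvm_config.py variants syntax; the variant names and the module name for the modular frontend case are assumptions, and the builtin-frontend variant would simply leave the parameter unset:

variants:
    - rhel_guest_kvm:
        modprobe_module = acpiphp
    - rhel_domU_modular_frontend:
        modprobe_module = xen-pcifront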

Example #2 (pci_hotplug.py)

Only operations on the monitor side (vm.monitor.*) would have to be implemented for each virtualization technology. On the guest side (session.cmd_output() and the like), things could remain unchanged, helped by parameters when necessary.

Current code:

# Get output of command 'info pci' as reference
info_pci_ref = vm.monitor.info("pci")

# Get output of command as reference
reference = session.cmd_output(params.get("reference_cmd"))

Suggested code:

# Get output of command 'info pci' as reference
info_pci_ref = vm.monitor.get_pci_info()

# Get output of command as reference
reference = session.cmd_output(params.get("reference_cmd"))
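
To make the boundary concrete, only the monitor class would need a per-technology implementation of get_pci_info(); the rest of the test would stay the same. The sketch below is an assumption of how that could look; the class names, the send_cmd callable and the xm subcommand are illustrative, not existing code.

# Hypothetical sketch: only the monitor side differs per virt technology.
import subprocess


class KVMMonitor(object):
    def __init__(self, send_cmd):
        # send_cmd: a callable that sends a human monitor command to qemu
        # and returns its output (the existing vm.monitor.cmd would fit).
        self._send_cmd = send_cmd

    def get_pci_info(self):
        return self._send_cmd("info pci")


class XenMonitor(object):
    def __init__(self, domain_name):
        self._domain = domain_name

    def get_pci_info(self):
        # Gather the equivalent information from the xen management tools;
        # the exact command used here is an assumption.
        proc = subprocess.Popen(["xm", "pci-list", self._domain],
                                stdout=subprocess.PIPE)
        return proc.communicate()[0]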


Example #3 (pci_hotplug.py)

Current test code not only interacts directly with qemu but also deals with different versions and syntaxes:

# Probe qemu to verify what is the supported syntax for PCI hotplug
cmd_output = vm.monitor.cmd("?")
if len(re.findall("\ndevice_add", cmd_output)) > 0:
    cmd_type = "device_add"
elif len(re.findall("\npci_add", cmd_output)) > 0:
    cmd_type = "pci_add"
else:
    raise error.TestError("Unknown version of qemu")

Again, this should be abstracted away in the monitor implementation and consumed by the test code (a comment in the current code even points this out):

# Execute pci_add (should be replaced by a proper monitor method call)
add_output = vm.monitor.cmd(pci_add_cmd)
if not "OK domain" in add_output:
    raise error.TestFail("Add PCI device failed. "
                         "Monitor command is: %s, Output: %r" %
                         (pci_add_cmd, add_output))
after_add = vm.monitor.info("pci")

This should become:

# Execute pci_add
if not vm.monitor_pci_add(model):
    raise error.TestFail("Add PCI device failed.")
after_add = vm.monitor.get_pci_info()
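
A minimal sketch of how that probing could move into the monitor class: the pci_add() method below, the command strings it builds and the success checks are assumptions meant to illustrate the idea, not a finished API.

# Hypothetical sketch of a monitor method that hides the qemu hotplug
# syntax differences probed in the current test code above.
import re


class KVMMonitor(object):
    def __init__(self, send_cmd):
        # send_cmd: callable sending a human monitor command and returning
        # its output (the existing vm.monitor.cmd would fit here).
        self._send_cmd = send_cmd
        self._hotplug_cmd = None

    def _get_hotplug_cmd(self):
        # Probe qemu only once to find out which hotplug syntax it supports.
        if self._hotplug_cmd is None:
            help_output = self._send_cmd("?")
            if re.search(r"^device_add", help_output, re.M):
                self._hotplug_cmd = "device_add"
            elif re.search(r"^pci_add", help_output, re.M):
                self._hotplug_cmd = "pci_add"
            else:
                raise RuntimeError("Unknown qemu version: no hotplug command")
        return self._hotplug_cmd

    def pci_add(self, model):
        # Build the command for the detected syntax and report success.
        if self._get_hotplug_cmd() == "device_add":
            output = self._send_cmd("device_add %s" % model)
            return "fail" not in output.lower()
        else:
            output = self._send_cmd("pci_add auto nic model=%s" % model)
            return "OK domain" in output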


Library code under client/tests/kvm

The components are sorted by dependencies, so that working down the list would resolve all dependencies, except where circular dependencies exist.

Filename              Impact      Dependencies                    Spec   Tied    Xen
--------              ------      ------------                    ----   ----    ---
kvm_config.py         Negligible  kvm_utils (logging)             No     No      Yes
kvm_subprocess.py     Negligible  none                            No     No      Yes
scan_results.py       Negligible  none                            No     No      Yes
cd_hash.py            Negligible  kvm_utils (logging)             No     No      No
rss_file_transfer.py  Negligible  none                            No     No      No
kvm_monitor.py        High        kvm_utils (random_str)          No     Yes     No[1]
kvm_vm.py             High        kvm_utils, kvm_subprocess,      No     Yes     Yes
                                  kvm_monitor, rss_file_transfer
kvm_scheduler.py      Medium      kvm_utils                       No[2]  Yes[3]  No
kvm_preprocessing.py  Medium      kvm_vm, kvm_utils,              No     Yes[4]  Yes
                                  kvm_subprocess, kvm_monitor
kvm_utils.py          High        kvm_subprocess                  No     Yes[5]  Yes
kvm_test_utils.py     High        kvm_utils, kvm_vm,              No     Yes     Yes
                                  kvm_subprocess, scan_results

Legend:


- Spec: Specific to KVM (that is, not applicable to other virt technologies)
- Tied: Tied to KVM in its current implementation
- Xen: Currently used by xen autotest or has an equivalent/derived implementation

[1] - Xen autotest currently does not implement a monitor, but this analysis recommends that one be implemented
[2] - Xen autotest currently does not have a control file that uses parallel tests, so this is low priority, but it could be useful
[3] - Tied only to kvm.VM and vm.monitor, used as abstractions
[4] - Tied only to kvm.VM and vm.monitor, used as abstractions
[5] - Contains both kvm-specific code and code that should be shared between xen and kvm autotest