
tp-libvirt's Introduction

Autotest: Fully automated tests on the Linux platform

Autotest is a framework for fully automated testing. It is designed primarily to test the Linux kernel, though it is useful for many other functions such as qualifying new hardware. It's an open-source project under the GPL and is used and developed by a number of organizations, including Google, IBM, Red Hat, and many others.

Autotest is composed of a number of modules that help you run standalone tests or set up a fully automated test grid, depending on your needs. A non-exhaustive list of modules:

  • Autotest client: The engine that executes the tests (dir client). Each autotest test is a directory inside client/tests and is represented by a Python class that implements a minimum number of methods. The client is what you need if you are a single developer trying out autotest and executing some tests. The autotest client executes "client side control files", which are regular Python programs that leverage the client API.
  • Autotest server: A program that copies the client to remote machines and controls their execution. The autotest server executes "server side control files", which are also regular Python programs, but leverage a higher-level API, since the autotest server can control test execution on multiple machines. If you want to perform slightly more complex tests involving more than one machine, you probably want the autotest server.
  • Autotest database: For test grids, we need a way to store test results, and that is the purpose of the database component. This DB is used by the autotest scheduler and the frontends to store and visualize test results.
  • Autotest scheduler: For test grids, we need a utility that can schedule and trigger job execution on test machines; the autotest scheduler is that utility.
  • Autotest web frontend: For test grids, a web app, with a backend written in Django (http://www.djangoproject.com/) and a UI written in GWT (http://code.google.com/webtoolkit/), lets users trigger jobs and visualize test results.
  • Autotest command line interface: Alternatively, users can also use the autotest CLI, written in Python.

Getting started with autotest client

For the impatient:

http://autotest.readthedocs.org/en/latest/main/local/ClientQuickStart.html

Installing the autotest server

For the impatient using Red Hat:

http://autotest.readthedocs.org/en/latest/main/sysadmin/AutotestServerInstallRedHat.html

For the impatient using Ubuntu/Debian:

http://autotest.readthedocs.org/en/latest/main/sysadmin/AutotestServerInstall.html

You are advised to read the documentation carefully, especially the details regarding which versions of Django autotest is compatible with.

Main project page

http://autotest.github.com/

Documentation

Autotest comes with in-tree documentation that can be built with Sphinx. A publicly available build of the latest master branch documentation and releases can be found on Read the Docs:

http://autotest.readthedocs.org/en/latest/index.html

It is possible to consult the docs of released versions, such as:

http://autotest.readthedocs.org/en/0.16.0/

If you want to build the documentation, here are the instructions:

  1. Make sure you have the package python-sphinx installed. For Fedora:

    $ sudo yum install python-sphinx

    For Ubuntu/Debian:

    $ sudo apt-get install python-sphinx

  2. Optionally, you can install the Read the Docs theme, which makes your in-tree documentation look just like the online version:

    $ sudo pip install sphinx_rtd_theme
    
  3. Build the docs:

    $ make -C documentation html
    
  4. Once done, point your browser to:

    $ [your-browser] docs/build/html/index.html
    

Mailing list and IRC info

http://autotest.readthedocs.org/en/latest/main/general/ContactInfo.html

Grabbing the latest source

https://github.com/autotest/autotest

Hacking and submitting patches

http://autotest.readthedocs.org/en/latest/main/developer/SubmissionChecklist.html

Downloading stable versions

https://github.com/autotest/autotest/releases

Next Generation Testing Framework

Please check Avocado, a next generation test automation framework being developed by several of the original Autotest team members:

http://avocado-framework.github.io/

tp-libvirt's People

Contributors

cevich, chloerh, chunfuwen, cliping, dzhengfy, ehabkost, hahakiki2010, hao-liu, hs0210, jferlan, kylazhang, ldoktor, lento-sun, liuyd96, lmr, meinali, mxie91, nanli1, pandawei, smitterl, vwu-vera, waynesun09, will-do, xiaodwan, yafu-1, yalzhang, yanqzhan, yingshun, ypu, zhouqt


tp-libvirt's Issues

Please make sure the VM has booted in positive attach/detach tests

The following ceph cases fail because there is no check for whether the VM has booted:

virtual_disks.ceph.hot_plug.without_auth.disk_attach.attach_device
virtual_disks.ceph.hot_plug.with_auth.domain_operation.snapshot.disk_readonly
virtual_disks.ceph.hot_plug.with_auth.domain_operation.snapshot.disk_internal
virtual_disks.ceph.hot_plug.with_auth.disk_attach.json_pseudo_protocol
virtual_disks.ceph.hot_plug.with_auth.disk_attach.disk_shareable
virtual_disks.ceph.hot_plug.with_auth.disk_attach.attach_device
virtual_disks.ceph.cold_plug.without_auth.domain_operation.snapshot.disk_snap_with_sanlock
virtual_disks.ceph.cold_plug.without_auth.disk_attach.disk_shareable
virtual_disks.ceph.cold_plug.with_auth.disk_attach.disk_shareable
virtual_disks.ceph.hot_plug.without_auth.disk_attach.disk_shareable

After adding vm.wait_for_login(timeout=600).close() before the disk attach, the cases pass.
For positive live attach/detach tests, you should make sure the VM is up. I will provide the reason later.
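
A minimal sketch of the fix in the test body, assuming the usual avocado-vt context (vm, vm_name and the prepared disk_xml come from the test; the attach call only mirrors the existing code):

    from virttest import virsh
    from virttest.utils_test import libvirt

    # Block until the guest has finished booting, then close the session;
    # only after that perform the positive hot-plug and check its status.
    vm.wait_for_login(timeout=600).close()
    result = virsh.attach_device(vm_name, disk_xml.xml, debug=True)
    libvirt.check_exit_status(result)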

Add method and test case for setvcpu

Planning to add a method and a test case for setvcpu; this issue is just to track it. A rough sketch of such a wrapper follows the help output below.

virsh help setvcpu
  NAME
    setvcpu - attach/detach vcpu or groups of threads

  SYNOPSIS
    setvcpu <domain> <vcpulist> [--enable] [--disable] [--config] [--live] [--current]

  DESCRIPTION
    Add or remove vcpus

  OPTIONS
    [--domain] <string>  domain name, id or uuid
    [--vcpulist] <string>  ids of vcpus to manipulate
    --enable         enable cpus specified by cpumap
    --disable        disable cpus specified by cpumap
    --config         affect next boot
    --live           affect running domain
    --current        affect current domain
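
A rough sketch of what the new wrapper could look like, built on the existing virttest.virsh.command() helper (the name and signature below are only a proposal, not a final API):

    from virttest import virsh

    def setvcpu(name, vcpulist, extra="", **dargs):
        """Run 'virsh setvcpu <domain> <vcpulist> [options]' and return the CmdResult."""
        return virsh.command("setvcpu %s %s %s" % (name, vcpulist, extra), **dargs)

    # e.g. enable vcpu 1 on the running domain and persist it across reboots:
    # result = setvcpu("avocado-vt-vm1", "1", "--enable --live --config")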


Fix the copy failure of virtual_disks.ceph.hot_plug.with_auth.default.rbd_blockcopy.slices_reuse_external

This case started failing after qemu 5.1:

commit e83dd6808c
Author: Kevin Wolf <[email protected]>
Date:   Mon May 11 15:58:24 2020 +0200

    mirror: Make sure that source and target size match
    
    If the target is shorter than the source, mirror would copy data until
    it reaches the end of the target and then fail with an I/O error when
    trying to write past the end.
    
    If the target is longer than the source, the mirror job would complete
    successfully, but the target wouldn't actually be an accurate copy of
    the source image (it would contain some additional garbage at the end).
    
    Fix this by checking that both images have the same size when the job
    starts.
    
    Signed-off-by: Kevin Wolf <[email protected]>
    Reviewed-by: Eric Blake <[email protected]>
    Message-Id: <[email protected]>
    Reviewed-by: Max Reitz <[email protected]>
    Reviewed-by: Vladimir Sementsov-Ogievskiy <[email protected]>
    Signed-off-by: Kevin Wolf <[email protected]>

So we should make sure the source image size (created by qemu-img create) and the destination image size (given by slice.size) are equal.
@chunfuwen
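
A minimal sketch of the intended fix, assuming the slice size is available from the test params (the path, param name and default below are illustrative):

    from avocado.utils import process

    slice_size = params.get("slice_size", "512M")            # size used for slice.size on the target
    src_image = "/var/lib/libvirt/images/blockcopy-src.raw"  # illustrative path

    # Create the source image with exactly the same virtual size as the target
    # slice, so qemu's mirror job size check passes.
    process.run("qemu-img create -f raw %s %s" % (src_image, slice_size), shell=True)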

Enable the test for transient disk

The transient disk will be enabled after the release of libvirt v6.9 (see the news):
Here is an example of testing a transient disk:

1. Start a VM with a transient disk:
# virsh dumpxml fedora31
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/tmp/test'/>
      <target dev='vdb' bus='virtio'/>
      <transient/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
...

# virsh start fedora31                                                 
Domain fedora31 started


2. Check the live xml
# virsh dumpxml fedora31|xmllint --xpath //disk -
<disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/tmp/test.TRANSIENT" index="3"/>
      <backingStore type="file" index="1">
        <format type="raw"/>
        <source file="/tmp/test"/>
        <backingStore/>
      </backingStore>
      <target dev="vdb" bus="virtio"/>
      <transient/>
      <alias name="virtio-disk1"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x09" function="0x0"/>
    </disk>

3. Destroy the VM and check whether the TRANSIENT file is still there:
# virsh destroy fedora31                         
Domain fedora31 destroyed

# file /tmp/test.TRANSIENT
/tmp/test.TRANSIENT: cannot open `/tmp/test.TRANSIENT' (No such file or directory)

4. Start the VM again, live-detach the disk, then check whether the TRANSIENT file is still there:
# virsh detach-disk fedora31 vdb
Disk detached successfully

# file /tmp/test.TRANSIENT      
/tmp/test.TRANSIENT: cannot open `/tmp/test.TRANSIENT' (No such file or directory)

Notes:

  1. The current check for the transient disk is not correct because the feature is not functionally enabled before v6.9.
  2. According to the validation code for the transient disk, it works only under the following conditions:
  • blockdev is used (v5.10 or later)
  • the disk source is a local file
  • the disk type is 'disk'
  • the disk is not readonly
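
For the automated check after destroy, something like the sketch below could be used (the overlay path matches the example above; vm_name and test come from the usual test context):

    import os
    from virttest import virsh

    virsh.destroy(vm_name, debug=True)
    overlay = "/tmp/test.TRANSIENT"
    if os.path.exists(overlay):
        test.fail("Transient overlay %s still exists after destroy" % overlay)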

create_lxc test failure

A tp-libvirt commit without a pull request added the create_lxc test, see:

48a3505

This test fails in my f19/f20 environment (with both the provided and an upstream libvirt git head build).

These failures need to be resolved. They are as follows:

Failure 1:

04:48:43 DEBUG| KVM userspace version: Unknown
04:48:43 INFO | Libvirt VM 'lxc_test_vm1', driver 'lxc', uri 'lxc:///'
04:48:43 DEBUG| device is [{'xml': <open file '/tmp/xml_utils_temp_CmEfKE.xml', mode 'wb+' at 0x464deb0>, 'virsh': <module 'virttest.virsh' from '/home/virt-test/virttest/virsh.pyc'>, 'validates': None, 'device_tag': 'emulator', 'xmltreefile': None}, {'xml': <open file '/tmp/xml_utils_temp_czDymv.xml', mode 'wb+' at 0x464d108>, 'virsh': <module 'virttest.virsh' from '/home/virt-test/virttest/virsh.pyc'>, 'validates': None, 'device_tag': 'console', 'xmltreefile': None}]
04:48:43 DEBUG| xml is <?xml version='1.0' encoding='UTF-8'?>
<domain type="lxc"><name>lxc_test_vm1</name><memory>500000</memory><currentMemory>500000</currentMemory><vcpu>1</vcpu><os><type arch="x86_64">exe</type><init>/bin/sh</init></os><devices><emulator>/usr/libexec/libvirt_lxc</emulator><console type="pty" /></devices></domain>
04:48:44 INFO | Domain lxc_test_vm1 created, will check with console
04:48:46 DEBUG| Sending command: lsof|grep '^sh.*/tmp/foo'
04:48:46 DEBUG| Sending command: echo $?
04:48:46 DEBUG| Destroying VM
04:48:46 DEBUG| Trying to shutdown VM with shell command
04:48:46 DEBUG| No MAC defined for NIC #0
04:48:46 DEBUG| Releasing MAC addresses for vm lxc_test_vm1.
04:48:46 DEBUG| Checking image file /home/virt-test/shared/data/images/jeos-19-64.qcow2
04:48:46 DEBUG| Running '/bin/qemu-img info /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:48:46 DEBUG| Running '/bin/qemu-img check /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:48:46 ERROR|
04:48:46 ERROR| Traceback (most recent call last):
04:48:46 ERROR|   File "/home/virt-test/virttest/standalone_test.py", line 213, in run_once
04:48:46 ERROR|     run_func(self, params, env)
04:48:46 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_create_lxc.py", line 83, in run
04:48:46 ERROR|     raise error.TestFail("Can not find file %s in container" % i)
04:48:46 ERROR| TestFail: Can not find file /tmp/foo in container
04:48:46 ERROR|
04:48:46 ERROR| FAIL type_specific.io-github-autotest-libvirt.virsh.create_lxc.with_passfds.with_none -> TestFail: Can not find file /tmp/foo in container
04:48:46 INFO |

Failure 2:

04:48:47 DEBUG| KVM userspace version: Unknown
04:48:47 DEBUG| device is [{'xml': <open file '/tmp/xml_utils_temp_McvyOf.xml', mode 'wb+' at 0x464d780>, 'virsh': <module 'virttest.virsh' from '/home/virt-test/virttest/virsh.pyc'>, 'validates': None, 'device_tag': 'emulator', 'xmltreefile': None}, {'xml': <open file '/tmp/xml_utils_temp_T44oBj.xml', mode 'wb+' at 0x464dbd0>, 'virsh': <module 'virttest.virsh' from '/home/virt-test/virttest/virsh.pyc'>, 'validates': None, 'device_tag': 'console', 'xmltreefile': None}]
04:48:47 DEBUG| xml is <?xml version='1.0' encoding='UTF-8'?>
<domain type="lxc"><name>lxc_test_vm1</name><memory>500000</memory><currentMemory>500000</currentMemory><vcpu>1</vcpu><os><type arch="x86_64">exe</type><init>/bin/sh</init></os><devices><emulator>/usr/libexec/libvirt_lxc</emulator><console type="pty" /></devices></domain>
04:48:49 DEBUG| Sending command: lsof|grep '^sh.*/tmp/foo'
04:48:49 DEBUG| Sending command: echo $?
04:48:49 DEBUG| Destroying VM
04:48:49 DEBUG| Trying to shutdown VM with shell command
04:48:49 DEBUG| No MAC defined for NIC #0
04:48:49 DEBUG| Releasing MAC addresses for vm lxc_test_vm1.
04:48:49 DEBUG| Checking image file /home/virt-test/shared/data/images/jeos-19-64.qcow2
04:48:49 DEBUG| Running '/bin/qemu-img info /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:48:49 DEBUG| Running '/bin/qemu-img check /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:48:50 ERROR|
04:48:50 ERROR| Traceback (most recent call last):
04:48:50 ERROR|   File "/home/virt-test/virttest/standalone_test.py", line 213, in run_once
04:48:50 ERROR|     run_func(self, params, env)
04:48:50 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_create_lxc.py", line 83, in run
04:48:50 ERROR|     raise error.TestFail("Can not find file %s in container" % i)
04:48:50 ERROR| TestFail: Can not find file /tmp/foo in container
04:48:50 ERROR|
04:48:50 ERROR| FAIL type_specific.io-github-autotest-libvirt.virsh.create_lxc.with_passfds.with_console -> TestFail: Can not find file /tmp/foo in container

Failure 3:

04:48:50 DEBUG| KVM version: 3.12.8-200.fc19.x86_64
04:48:50 DEBUG| KVM userspace version: Unknown
04:48:50 DEBUG| device is [{'xml': <open file '/tmp/xml_utils_temp_s511s7.xml', mode 'wb+' at 0x464dbd0>, 'virsh': <module 'virttest.virsh' from '/home/virt-test/virttest/virsh.pyc'>, 'validates': None, 'device_tag': 'emulator', 'xmltreefile': None}, {'xml': <open file '/tmp/xml_utils_temp_Hhaw9Z.xml', mode 'wb+' at 0x464d780>, 'virsh': <module 'virttest.virsh' from '/home/virt-test/virttest/virsh.pyc'>, 'validates': None, 'device_tag': 'console', 'xmltreefile': None}]
04:48:50 DEBUG| xml is <?xml version='1.0' encoding='UTF-8'?>
<domain type="lxc"><name>lxc_test_vm1</name><memory>500000</memory><currentMemory>500000</currentMemory><vcpu>1</vcpu><os><type arch="x86_64">exe</type><init>/bin/sh</init></os><devices><emulator>/usr/libexec/libvirt_lxc</emulator><console type="pty" /></devices></domain>
04:48:52 DEBUG| Sending command: lsof|grep '^sh.*/tmp/foo'
04:48:52 DEBUG| Sending command: echo $?
04:48:52 DEBUG| Destroying VM
04:48:52 DEBUG| Trying to shutdown VM with shell command
04:48:52 DEBUG| No MAC defined for NIC #0
04:48:53 DEBUG| Releasing MAC addresses for vm lxc_test_vm1.
04:48:53 DEBUG| Checking image file /home/virt-test/shared/data/images/jeos-19-64.qcow2
04:48:53 DEBUG| Running '/bin/qemu-img info /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:48:53 DEBUG| Running '/bin/qemu-img check /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:48:53 ERROR|
04:48:53 ERROR| Traceback (most recent call last):
04:48:53 ERROR|   File "/home/virt-test/virttest/standalone_test.py", line 213, in run_once
04:48:53 ERROR|     run_func(self, params, env)
04:48:53 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_create_lxc.py", line 83, in run
04:48:53 ERROR|     raise error.TestFail("Can not find file %s in container" % i)
04:48:53 ERROR| TestFail: Can not find file /tmp/foo in container
04:48:53 ERROR|
04:48:53 ERROR| FAIL type_specific.io-github-autotest-libvirt.virsh.create_lxc.with_passfds.with_autodestroy_console -> TestFail: Can not find file /tmp/foo in container

snapshot_edit test failures

Hello,

Maybe the problem is only on my side (it was working before the xml comparison changes) and I don't fully understand the logic of the latest changes to the xml comparison code applied in commit a1b76af, but currently some of the snapshot_edit subtests fail with the error TestFail: Failed xml before/after comparison. In the past they worked fine.

Currently I have failures in
snapshot_edit.positive_tests.edit_option_clone.edit_option_current.*
snapshot_edit.positive_tests.edit_option_clone.edit_option_current.disk_snapshot.with_desc.*
snapshot_edit.positive_tests.edit_option_clone.edit_option_current.mem_snapshot.*
snapshot_edit.positive_tests.edit_option_clone.edit_option_snapname.internal_snapshot.*
....
....
and so on, in some other subtests.

Could you please review it or advise me? (Maybe there is something related to the changes that I am not doing correctly.) I am running those tests in Avocado.

Thanks

Enhance PoolVolumeTest class

I think it is time for us to enhance the PoolVolumeTest class if possible. That way, our shared libraries will become stronger than before.
However, if you think it is too difficult to enhance this class, feel free to leave your comments, thanks.

[Migration] Enable stress, stress-ng, iozone, kselftest, LTP to be executed inside guest and migrate

MultiVM guest boot without guest import was enabled in the avocado-vt framework to clone libvirt-based VMs:

avocado-framework/avocado-vt#1498
avocado-framework/avocado-vt#1562
avocado-framework/avocado-vt#1564
avocado-framework/avocado-vt#1569

MultiVM stress migration test enablement patch:
#1565

Recent patch work in the avocado-vt PR avocado-framework/avocado-vt#1598 would enable us to run any tool (stress, stress-ng, iozone, kselftest, LTP) inside the guest, on the host, or on a remote host, based on configurable params.

The plan is to use this and develop scenarios to run:

  1. Transactional memory testcases available in kselftest for migration (guest / host / remote host).
  2. iozone tests which uncovered recent migration issues (guest / host / remote host).
  3. LTP, stress-ng tests to be explored and enabled with migration (guest / host /remote host).

Move non-core virsh API to tp-libvirt

This is the list of virttest.virsh APIs used by libvirt_vm:

from virttest.virsh import command
from virttest.virsh import attach_interface
from virttest.virsh import canonical_uri
from virttest.virsh import define
from virttest.virsh import destroy
from virttest.virsh import detach_interface
from virttest.virsh import domain_exists
from virttest.virsh import domblklist
from virttest.virsh import domid
from virttest.virsh import dominfo
from virttest.virsh import domjobabort
from virttest.virsh import domstate
from virttest.virsh import domuuid
from virttest.virsh import driver
from virttest.virsh import dump
from virttest.virsh import dumpxml
from virttest.virsh import is_alive
from virttest.virsh import is_dead
from virttest.virsh import migrate
from virttest.virsh import qemu_monitor_command
from virttest.virsh import restore
from virttest.virsh import resume
from virttest.virsh import save
from virttest.virsh import screenshot
from virttest.virsh import shutdown
from virttest.virsh import start
from virttest.virsh import suspend
from virttest.virsh import undefine
from virttest.virsh import vcpuinfo
from virttest.virsh import vcpupin

So, 29 functions are necessary to the core. As of this writing, there are 179 functions implemented in virttest.virsh, meaning we could move 150 functions to tp-libvirt.

Close pull requests created more than two years ago

In the current pull request pool there are quite a number of pull requests that were created more than 2 years ago but have had no updates since creation. Those pull requests add noise to upstream PR tracking and somewhat distract reviewers' effort as well. Therefore, we would like to close those pull requests once there is no author feedback within two weeks after the author is @-mentioned in the PR.

Freeze utils_test/libvirt.py

@sathnaga @balamuruhans @smitterl ,

Here we have a new plan for virttest/utils_test/libvirt.py.
As this file is becoming bigger and bigger, there are too many kinds of functions mixed up in it, and its readability and usability are getting worse and worse. So we plan to lock down this file so that no new functions are allowed to be added to it. At the same time, we plan to create a new folder named virttest/utils_libvirt and create multiple files under utils_libvirt/ by feature on demand, such as memory, cpu, migration and so on. Those files will contain the utility functions that are mostly related to virsh commands.

Your comments are welcome. If there is no objection, please help avoid adding new functions to libvirt.py from now on.
Thanks a lot.

Add library for hotplug related methods

Currently we have multiple test cases that use repeated code to manipulate data before and
after hotplug events; this library is mainly aimed at bringing most of that common code here as
generic methods.

For example, stubs like the ones below (an implementation sketch of one of them follows the stubs):

def is_hotplug_supported(vm, mtype):
    """
    Return whether hotplug is supported on the given machine type
    """

def get_cpu_xmldata(vm_name, options):
    """
    Return all cpu related xml data from the vm xml
    """

def check_vcpucount(vm_name, count, options):
    """
    Return True if the vcpu count matches across the different
    libvirt APIs and the guest:
    vcpucount, vcpuinfo, vcpupin, guestvcpus, etc.
    """

[Migration] Enable SAN-based guests to leverage existing migration test cases

Currently all the migration scenarios use NFS shared storage to configure and perform migration, but ideally the guest would be based on a disk from LUN storage shared across multiple hosts. I am working on bringing up the guest with a LUN through Avocado-VT and sent a recent PR: avocado-framework/avocado-vt#1593.

The plan is to make use of the existing test cases with this guest to perform migration, and we need a few more code changes in:

  • tp-libvirt, to avoid the guest xml changes currently being done for NFS migration.
  • avocado-vt, to avoid some migration pre-setups that are not required for SAN migration.

With these changes it is expected that the migration test cases run seamlessly on NFS or on SAN, based on the configuration params.

Root logger is not available to avocado tests after the 92.0 LTS

In b2459dd avocado stopped using the logs from the root logger; every avocado log lives under the avocado namespace now. This change was already accepted in avocado-vt. With those changes, every avocado-vt test has to use the avocado namespace for logging, otherwise the logs will be lost, because logs sent to the root logger are not picked up by the avocado output. This is also part of the Avocado 92.0 LTS version. As a solution, the approach from avocado-vt adbfa56 might be used.
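
A minimal sketch of what tests would need to do after the change, following the pattern adopted in avocado-vt (a module-level logger under the 'avocado.' namespace instead of the root logger):

    import logging

    # Messages sent through this logger end up in the avocado job output;
    # plain root-logger calls such as logging.info() would be lost.
    LOG = logging.getLogger('avocado.' + __name__)

    def run(test, params, env):
        LOG.info("This message ends up in the avocado test log")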

Discuss tp-libvirt scripts support for python 2

Everybody probably knows that Python 2 reached end of life on January 1st, 2020.
I am raising this issue for discussion: do we still need to support Python 2 in tp-libvirt scripts?
Feel free to contribute your ideas.

[Migration] Scenarios that exercise the ibm,dynamic-memory-v2 implementation

There is kernel and QEMU development work on ibm,dynamic-memory-v2 for PowerPC:
Kernel: https://patchwork.ozlabs.org/cover/828666/
Qemu: https://patchwork.kernel.org/patch/10263343/

From my discussion with the feature developer, it is exercised during guest boot time, so we should explore and implement test scenarios to verify the functionality with migration:

  • Boot, hotplug, migrate
  • Boot, hotplug, migrate, unplug, reboot
  • Boot, hotplug, migrate, unplug, migrate
  • Boot, migrate, reboot
  • Coldplug, migrate, unplug, reboot
  • Boot, migrate, hotplug, reboot

Fix false failure of virtual_disks.multidisks.coldplug.single_disk_test.disk_bus_device_option.disk_bus_usb.device_disk.error_test4

The error "Failed to define domain from /tmp/xml_utils_temp_bs3scmwt.xml: internal error: unexpected address type for usb disk" happens when defining the xml (a pci address on a usb disk):

<?xml version='1.0' encoding='UTF-8'?><domain type="kvm">  <name>avocado-vt-vm1</name>  <uuid>598729f8-af94-4a5e-b56b-117cadd6c933</uuid>  <metadata>    <ns0:libosinfo xmlns:ns0="http://libosinfo.org/xmlns/libvirt/domain/1.0">      <ns0:os id="http://redhat.com/rhel/8.3" />    </ns0:libosinfo>  </metadata>  <memory unit="KiB">1048576</memory>  <currentMemory unit="KiB">1048576</currentMemory>  <vcpu placement="static">2</vcpu>  <os>    <type arch="x86_64" machine="pc-q35-rhel8.4.0">hvm</type>    <boot dev="hd" />  </os>  <features>    <acpi />    <apic />  </features>  <cpu check="partial" mode="host-model">    <feature name="vmx" policy="disable" />  </cpu>  <clock offset="utc">    <timer name="rtc" tickpolicy="catchup" />    <timer name="pit" tickpolicy="delay" />    <timer name="hpet" present="no" />  </clock>  <on_poweroff>destroy</on_poweroff>  <on_reboot>restart</on_reboot>  <on_crash>destroy</on_crash>  <pm>    <suspend-to-mem enabled="no" />    <suspend-to-disk enabled="no" />  </pm>  <devices><emulator>/usr/libexec/qemu-kvm</emulator><disk device="disk" type="file">      <driver name="qemu" type="qcow2" />      <source file="/var/lib/avocado/data/avocado-vt/images/jeos-27-x86_64.qcow2" />      <target bus="virtio" dev="vda" />      <address bus="0x04" domain="0x0000" function="0x0" slot="0x00" type="pci" />    </disk><controller index="0" type="sata">      <address bus="0x00" domain="0x0000" function="0x2" slot="0x1f" type="pci" />    </controller><controller index="0" model="pcie-root" type="pci" /><controller index="1" model="pcie-root-port" type="pci">      <model name="pcie-root-port" />      <target chassis="1" port="0x10" />      <address bus="0x00" domain="0x0000" function="0x0" multifunction="on" slot="0x02" type="pci" />    </controller><controller index="2" model="pcie-root-port" type="pci">      <model name="pcie-root-port" />      <target chassis="2" port="0x11" />      <address bus="0x00" domain="0x0000" function="0x1" slot="0x02" type="pci" />    </controller><controller index="3" model="pcie-root-port" type="pci">      <model name="pcie-root-port" />      <target chassis="3" port="0x12" />      <address bus="0x00" domain="0x0000" function="0x2" slot="0x02" type="pci" />    </controller><controller index="4" model="pcie-root-port" type="pci">      <model name="pcie-root-port" />      <target chassis="4" port="0x13" />      <address bus="0x00" domain="0x0000" function="0x3" slot="0x02" type="pci" />    </controller><controller index="5" model="pcie-root-port" type="pci">      <model name="pcie-root-port" />      <target chassis="5" port="0x14" />      <address bus="0x00" domain="0x0000" function="0x4" slot="0x02" type="pci" />    </controller><controller index="6" model="pcie-root-port" type="pci">      <model name="pcie-root-port" />      <target chassis="6" port="0x15" />      <address bus="0x00" domain="0x0000" function="0x5" slot="0x02" type="pci" />    </controller><controller index="7" model="pcie-root-port" type="pci">      <model name="pcie-root-port" />      <target chassis="7" port="0x16" />      <address bus="0x00" domain="0x0000" function="0x6" slot="0x02" type="pci" />    </controller><controller index="0" type="virtio-serial">      <address bus="0x03" domain="0x0000" function="0x0" slot="0x00" type="pci" />    </controller><interface type="network">      <mac address="52:54:00:3f:2f:b2" />      <source network="default" />      <model type="virtio" />      <address bus="0x01" domain="0x0000" function="0x0" slot="0x00" type="pci" />    </interface><serial 
type="pty">      <target port="0" type="isa-serial">        <model name="isa-serial" />      </target>    </serial><console type="pty">      <target port="0" type="serial" />    </console><channel type="unix">      <target name="org.qemu.guest_agent.0" type="virtio" />      <address bus="0" controller="0" port="1" type="virtio-serial" />    </channel><input bus="ps2" type="mouse" /><input bus="ps2" type="keyboard" /><graphics autoport="yes" port="-1" type="vnc">      <listen type="address" />    </graphics><video>      <model heads="1" primary="yes" ram="65536" type="qxl" vgamem="16384" vram="65536" />      <address bus="0x00" domain="0x0000" function="0x0" slot="0x01" type="pci" />    </video><memballoon model="virtio">      <address bus="0x05" domain="0x0000" function="0x0" slot="0x00" type="pci" />    </memballoon><rng model="virtio">      <backend model="random">/dev/urandom</backend>      <address bus="0x06" domain="0x0000" function="0x0" slot="0x00" type="pci" />    </rng><disk device="disk" type="file"><source file="/tmp/avocado_pl4pcr19/disk.raw" /><target bus="usb" dev="sda" /><driver cache="none" name="qemu" type="raw" /><address bus="0x00" domain="0x0000" function="0x0" slot="0x09" type="pci" /></disk><controller index="0" model="piix3-uhci" type="usb" /><controller index="1" model="ich9-ehci1" type="usb" /><input bus="usb" type="tablet" /><hub type="usb" /></devices></domain>

The false failure is because libvirt moved the validation code from the domain start API to the domain define API. See https://gitlab.com/libvirt/libvirt/-/commit/aa65f0f2f1
@chunfuwen Please move the error checking to virsh define for libvirt-6.9 and later.
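
A minimal sketch of the suggested adjustment, assuming the usual avocado-vt helpers (libvirt_version, virsh and utils_test.libvirt) and the test's prepared vmxml/vm_name:

    from virttest import libvirt_version
    from virttest import virsh
    from virttest.utils_test import libvirt

    if libvirt_version.version_compare(6, 9, 0):
        # libvirt >= 6.9 already rejects the bad address at define time
        result = virsh.define(vmxml.xml, debug=True)
    else:
        # older libvirt only validates the address when the domain is started
        virsh.define(vmxml.xml, debug=True)
        result = virsh.start(vm_name, debug=True)
    libvirt.check_exit_status(result, expect_error=True)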

snapshot_create_as: Fix negative test with quiesce without ga on running domain

Run the case:

./run -t libvirt --test type_specific.io-github-autotest-libvirt.virsh.snapshot_create_as.negative_tests.quiesce_without_ga.running_domain

Running setup. Please wait...
SETUP: PASS (19.56 s)
DATA DIR: /home/wayne/virt_test
DEBUG LOG: /home/wayne/workspace/virt-test/logs/run-2014-05-13-16.43.25/debug.log
TESTS: 1
(1/1) type_specific.io-github-autotest-libvirt.virsh.snapshot_create_as.negative_tests.quiesce_without_ga.running_domain: FAIL (26.53 s)
TOTAL TIME: 26.57 s
TESTS PASSED: 0
TESTS FAILED: 1
SUCCESS RATE: 0.00 %

logs:
16:43:44 DEBUG| Running virsh command: snapshot-create-as virt-tests-vm1 --quiesce --disk-only
16:43:44 DEBUG| Running '/usr/bin/virsh snapshot-create-as virt-tests-vm1 --quiesce --disk-only'
16:43:52 DEBUG| status: 0
16:43:52 DEBUG| stdout: Domain snapshot 1399970624 created
16:43:52 DEBUG| stderr:

16:43:57 ERROR| Traceback (most recent call last):
16:43:57 ERROR| File "/home/wayne/workspace/virt-test/virttest/standalone_test.py", line 219, in run_once
16:43:57 ERROR| run_func(self, params, env)
16:43:57 ERROR| File "/home/wayne/workspace/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/snapshot/virsh_snapshot_create_as.py", line 404, in run
16:43:57 ERROR| raise error.TestFail("Run successfully with wrong command!")
16:43:57 ERROR| TestFail: Run successfully with wrong command!
16:43:57 ERROR|
16:43:57 ERROR| FAIL type_specific.io-github-autotest-libvirt.virsh.snapshot_create_as.negative_tests.quiesce_without_ga.running_domain -> TestFail: Run successfully with wrong command!

[Migration] Update cases for set migration bandwidth

There are already some existing cases for migration speed, like virsh.migrate_set_get_speed.
Here we have some more requirements, like the following (a rough sketch of one such check appears after the list):

  • set/get migration speed for shutoff/running guest
  • set/get migration speed before or during migration
  • get default migration speed for shutoff/running guest + migration
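
Assuming virttest.virsh exposes migrate_setspeed/migrate_getspeed wrappers like it does for other virsh subcommands, a sketch of one check could look like this (the bandwidth value is illustrative; vm_name and test come from the test context):

    from virttest import virsh

    bandwidth = "100"   # MiB/s, illustrative value
    virsh.migrate_setspeed(vm_name, bandwidth, debug=True)
    result = virsh.migrate_getspeed(vm_name, debug=True)
    if result.stdout_text.strip() != bandwidth:
        test.fail("Expected migration speed %s, got %s"
                  % (bandwidth, result.stdout_text.strip()))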

'graphic_act' is not defined

Hello,

It seems that by accident the line graphic_act = vmxml_act.devices.by_device_tag('graphics')[0] was removed from virsh_domdisplay.py in commit a1b76af, which is the reason for the failure NameError: global name 'graphic_act' is not defined in src/virsh_cmd/domain/virsh_domdisplay.py, line 140.

I am not sure what exactly was supposed to be fixed, so I'd rather file an issue.

Thanks

About the usage of test.error()/test.fail()/test.cancel()

Recently we had some discussion to reach a common understanding of the usage of the functions in the subject.
Below is our current understanding. Comments are welcome! A small usage sketch follows the list.

  • The basic strategy is that we should skip tests as little as possible

  • test.fail() - used only when a failure happens at a test checkpoint

  • test.error() - used when

    1. the VM fails to start
    2. disk creation fails
    3. a required package is missing
    4. there are not enough VMs
    5. the parameters are incorrect
    6. any other failure happens during test environment setup

  • test.cancel() - used when

    1. the feature is not supported by the current libvirt version or architecture
    2. the hardware configuration does not support the test
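
A small sketch of how this maps onto a typical tp-libvirt test body (the version requirement, setup steps and check_feature() helper are illustrative only):

    from virttest import libvirt_version

    def run(test, params, env):
        # Feature/environment gating -> cancel (illustrative version requirement)
        if not libvirt_version.version_compare(6, 0, 0):
            test.cancel("Feature not supported by this libvirt version")

        vm = env.get_vm(params.get("main_vm"))
        try:
            vm.start()
        except Exception as details:
            # Setup problems (vm start, disks, packages, params) -> error
            test.error("Failed to set up the test environment: %s" % details)

        # Checkpoint verification -> fail (check_feature() is a hypothetical helper)
        if not check_feature(vm):
            test.fail("Checkpoint failed: feature did not behave as expected")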

numatune test failure with cgroup.stop

Not quite sure when this was introduced, but I don't believe the failures listed below should be failures; rather, they should be PASSes. Both fail only because numatune is attempted while the cgroup service is stopped.

This has been an issue for a while, but I didn't want it lost/forgotten!

Failure 1

04:47:10 DEBUG| Running 'systemctl status cgconfig.service'
04:47:10 DEBUG| Running 'service libvirtd restart'
04:47:10 DEBUG| Running 'virsh list'
04:47:11 DEBUG| Running 'virsh list'
04:47:11 DEBUG| Restarted libvirtd successfully
04:47:11 DEBUG| Running 'virsh list'
04:47:11 DEBUG| Starting vm 'virt-tests-vm1'
04:47:11 DEBUG| waiting for domain virt-tests-vm1 to start (0.000017 secs)
04:47:11 DEBUG| Setting ignore_status to True.
04:47:11 DEBUG| Running 'systemctl status cgconfig.service'
04:47:11 DEBUG| Setting ignore_status to True.
04:47:11 DEBUG| Running 'systemctl start cgconfig.service'
04:47:12 DEBUG| * Command:
    systemctl start cgconfig.service
Exit status: 6
Duration: 0.00612497329712

stderr:
Failed to issue method call: Unit cgconfig.service failed to load: No such file or directory. See system logs and 'systemctl status cgconfig.service' for details.
04:47:12 DEBUG| Running 'service libvirtd restart'
04:47:12 DEBUG| Running 'virsh list'
04:47:13 DEBUG| Running 'virsh list'
04:47:13 DEBUG| Restarted libvirtd successfully
04:47:13 DEBUG| Running 'virsh list'
04:47:13 DEBUG| Undefine VM virt-tests-vm1
04:47:13 DEBUG| Define VM from /tmp/xml_utils_temp_r8xFOD.xml
04:47:13 WARNI| Requested MAC address release from persistent vm virt-tests-vm1. Ignoring.
04:47:13 DEBUG| Checking image file /home/virt-test/shared/data/images/jeos-19-64.qcow2
04:47:13 DEBUG| Running '/bin/qemu-img info /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:47:13 DEBUG| Running '/bin/qemu-img check /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:47:14 ERROR|
04:47:14 ERROR| Traceback (most recent call last):
04:47:14 ERROR|   File "/home/virt-test/virttest/standalone_test.py", line 213, in run_once
04:47:14 ERROR|     run_func(self, params, env)
04:47:14 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_numatune.py", line 216, in run
04:47:14 ERROR|     get_numa_parameter(params)
04:47:14 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_numatune.py", line 95, in get_numa_parameter
04:47:14 ERROR|     raise error.TestFail("Unexpected return code %d" % status)
04:47:14 ERROR| TestFail: Unexpected return code 0
04:47:14 ERROR|
04:47:14 ERROR| FAIL type_specific.io-github-autotest-libvirt.virsh.numatune.negative_testing.get_numa_parameter.running_guest.cgroup.stop -> TestFail: Unexpected return code 0

Failure 2:

04:48:06 DEBUG| waiting for domain virt-tests-vm1 to start (0.000020 secs)
04:48:06 DEBUG| Running 'true'
04:48:06 DEBUG| Running 'readlink /proc/1/exe'
04:48:06 DEBUG| Destroying VM
04:48:06 DEBUG| Trying to shutdown VM with shell command
04:48:06 DEBUG| Found/Verified IP 192.168.122.195 for VM virt-tests-vm1 NIC 0
04:48:06 DEBUG| Login command: 'ssh -o UserKnownHostsFile=/dev/null -o PreferredAuthentications=password -p 22 [email protected]'
04:48:09 DEBUG| Client process terminated    (status: 255,    output: 'ssh: connect to host 192.168.122.195 port 22: No route to host\n')
04:48:12 WARNI| Requested MAC address release from persistent vm virt-tests-vm1. Ignoring.
04:48:12 DEBUG| Setting ignore_status to True.
04:48:12 DEBUG| Running 'systemctl status cgconfig.service'
04:48:12 DEBUG| Running 'service libvirtd restart'
04:48:12 DEBUG| Running 'virsh list'
04:48:13 DEBUG| Running 'virsh list'
04:48:13 DEBUG| Restarted libvirtd successfully
04:48:13 DEBUG| Running 'virsh list'
04:48:13 DEBUG| Starting vm 'virt-tests-vm1'
04:48:14 DEBUG| waiting for domain virt-tests-vm1 to start (0.000016 secs)
04:48:14 DEBUG| Running virsh command: numatune virt-tests-vm1
04:48:14 DEBUG| Running '/usr/bin/virsh numatune virt-tests-vm1'
04:48:14 DEBUG| status: 0
04:48:14 DEBUG| stdout: numa_mode      : strict
numa_nodeset   : 0
04:48:14 DEBUG| stderr:
04:48:14 DEBUG| Setting ignore_status to True.
04:48:14 DEBUG| Running 'systemctl status cgconfig.service'
04:48:14 DEBUG| Setting ignore_status to True.
04:48:14 DEBUG| Running 'systemctl start cgconfig.service'
04:48:14 DEBUG| * Command:
    systemctl start cgconfig.service
Exit status: 6
Duration: 0.0063750743866

stderr:
Failed to issue method call: Unit cgconfig.service failed to load: No such file or directory. See system logs and 'systemctl status cgconfig.service' for details.
04:48:14 DEBUG| Running 'service libvirtd restart'
04:48:14 DEBUG| Running 'virsh list'
04:48:15 DEBUG| Running 'virsh list'
04:48:15 DEBUG| Restarted libvirtd successfully
04:48:15 DEBUG| Running 'virsh list'
04:48:15 DEBUG| Undefine VM virt-tests-vm1
04:48:15 DEBUG| Define VM from /tmp/xml_utils_temp_U0UrPb.xml
04:48:15 WARNI| Requested MAC address release from persistent vm virt-tests-vm1. Ignoring.
04:48:16 DEBUG| Checking image file /home/virt-test/shared/data/images/jeos-19-64.qcow2
04:48:16 DEBUG| Running '/bin/qemu-img info /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:48:16 DEBUG| Running '/bin/qemu-img check /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:48:16 ERROR|
04:48:16 ERROR| Traceback (most recent call last):
04:48:16 ERROR|   File "/home/virt-test/virttest/standalone_test.py", line 213, in run_once
04:48:16 ERROR|     run_func(self, params, env)
04:48:16 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_numatune.py", line 218, in run
04:48:16 ERROR|     set_numa_parameter(params)
04:48:16 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_numatune.py", line 139, in set_numa_parameter
04:48:16 ERROR|     raise error.TestFail("Unexpected return code %d" % status)
04:48:16 ERROR| TestFail: Unexpected return code 0
04:48:16 ERROR|
04:48:16 ERROR| FAIL type_specific.io-github-autotest-libvirt.virsh.numatune.negative_testing.set_numa_parameter.running_guest.cgroup.stop -> TestFail: Unexpected return code 0

Discuss where to put helper function in python file, within run() or same level of run()?

This issue comes from #3256 (comment).
I'd like to create this issue to specifically talk about this.

In the existing code, some helper functions are defined within the run() function, such as in libvirt/tests/src/usb_device.py, while other helper functions are defined at the same level as run(), such as in libvirt/tests/src/multifunction.py. We did not have a strict/recommended rule for this in the past, so contributors wrote code in their preferred way.

In PR 3256, @smitterl suggested that we'd better use the latter way, which is to define helpers at the same level as run(), because it
a) avoids using variables that are not passed explicitly, and b) improves readability.
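
For illustration, a minimal sketch of the recommended style, with the helper defined at the same level as run() and everything it needs passed in explicitly (all names are made up for the example):

    def check_disk_present(session, target):
        """Module-level helper: everything it needs is passed in explicitly."""
        status, _ = session.cmd_status_output("ls /dev/%s" % target)
        return status == 0

    def run(test, params, env):
        vm = env.get_vm(params.get("main_vm"))
        session = vm.wait_for_login()
        target = params.get("target_dev", "vdb")
        if not check_disk_present(session, target):
            test.fail("Disk %s not found in the guest" % target)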

I hope we can reach an agreement in this issue, so that contributors can follow the conclusion in future pull requests.

Welcome to add your opinions. @chunfuwen @kylazhang @chloerh @Yingshun @yafu-1 @smitterl @kvarga, @fangge1212

Enable spell check in CI workflow

This practice is enabled in avocado-vt project, avocado-framework/avocado-vt#3186.
It would also help improve the health of the tp-libvirt project if we adopt it in tp-libvirt as well:
    - name: Run spellchecker
      run: pylint --errors-only --disable=all --enable=spelling --spelling-dict=en_US --spelling-private-dict-file=spell.ignore *

Report on recent CI failures

Most of the currently open pull requests are marked as a CI failure. This is caused by:

  1. There is an internal Jenkins CI server tracking new pull requests of virt-test and tp-libvirt. It runs tests related to the PR and generates a report.
  2. To make this work, it needs an OAuth token, created by me, to access these repos. Since I was just a normal contributor, Jenkins simply used these credentials to trigger test builds, and so far so good.
  3. Then I became a maintainer and restarted our Jenkins server. It worked as it should and used the OAuth token to set the CI status, but all statuses were set to failed because of a configuration problem.

To fix this issue, I've removed the token from the Jenkins server. So CI will pass if you rebase the PR. I will also find a way to clean the CI status manually.

Sorry for the inconvenience this caused.

dompmsuspend test failures

Prior to the split of virt-test/tp-libvirt, the following pull request:

autotest/virt-test#1242

added the test for 'dompmsuspend'. Running it under f19 (or f20), using either the provided libvirt or a libvirt built from git top of trunk, results in two failures in my testing that need to be resolved. Whether it's a libvirt bug or a test bug I'm not quite sure yet, but I wanted to log it as an issue to ensure follow-up!

Errors are as follows from debug.log output:

Failure 1:

04:24:37 DEBUG| Sending command: rpm -q qemu-guest-agent || yum install -y qemu-guest-agent
04:24:38 DEBUG| Sending command: echo $?
04:24:38 DEBUG| Sending command: ps aux |grep [q]emu-ga
04:24:38 DEBUG| Sending command: echo $?
04:24:38 DEBUG| Sending command: rpm -q pm-utils || yum install -y pm-utils
04:24:38 DEBUG| Sending command: echo $?
04:24:38 DEBUG| Running virsh command: dompmsuspend virt-tests-vm1 disk --duration 0
04:24:38 DEBUG| Running '/usr/bin/virsh dompmsuspend virt-tests-vm1 disk --duration 0'
04:24:39 DEBUG| status: 1
04:24:39 DEBUG| stdout:
04:24:39 DEBUG| stderr: error: Domain virt-tests-vm1 could not be suspended
error: internal error: unable to execute QEMU agent command 'guest-suspend-disk': child process has failed to suspend
04:24:41 DEBUG| Undefine VM virt-tests-vm1
04:24:41 DEBUG| Define VM from /tmp/xml_utils_temp_S3dKiq.xml
04:24:41 DEBUG| Checking image file /home/virt-test/shared/data/images/jeos-19-64.qcow2
04:24:41 DEBUG| Running '/bin/qemu-img info /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:24:41 DEBUG| Running '/bin/qemu-img check /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:24:42 ERROR|
04:24:42 ERROR| Traceback (most recent call last):
04:24:42 ERROR|   File "/home/virt-test/virttest/standalone_test.py", line 213, in run_once
04:24:42 ERROR|     run_func(self, params, env)
04:24:42 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_dompmsuspend.py", line 81, in run_virsh_dompmsuspend
04:24:42 ERROR|     raise error.TestFail("Run failed with right command")
04:24:42 ERROR| TestFail: Run failed with right command
04:24:42 ERROR|
04:24:42 ERROR| FAIL type_specific.io-github-autotest-libvirt.virsh.dompmsuspend.running.disk -> TestFail: Run failed with right command

Failure 2

04:25:10 DEBUG| Sending command: rpm -q qemu-guest-agent || yum install -y qemu-guest-agent
04:25:10 DEBUG| Sending command: echo $?
04:25:10 DEBUG| Sending command: ps aux |grep [q]emu-ga
04:25:10 DEBUG| Sending command: echo $?
04:25:10 DEBUG| Sending command: rpm -q pm-utils || yum install -y pm-utils
04:25:10 DEBUG| Sending command: echo $?
04:25:11 DEBUG| Running virsh command: dompmsuspend virt-tests-vm1 hybrid --duration 0
04:25:11 DEBUG| Running '/usr/bin/virsh dompmsuspend virt-tests-vm1 hybrid --duration 0'
04:25:12 DEBUG| status: 1
04:25:12 DEBUG| stdout:
04:25:12 DEBUG| stderr: error: Domain virt-tests-vm1 could not be suspended
error: internal error: unable to execute QEMU agent command 'guest-suspend-hybrid': child process has failed to suspend
04:25:13 DEBUG| Undefine VM virt-tests-vm1
04:25:13 DEBUG| Define VM from /tmp/xml_utils_temp_LzteEJ.xml
04:25:13 DEBUG| Checking image file /home/virt-test/shared/data/images/jeos-19-64.qcow2
04:25:13 DEBUG| Running '/bin/qemu-img info /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:25:13 DEBUG| Running '/bin/qemu-img check /home/virt-test/shared/data/images/jeos-19-64.qcow2'
04:25:13 ERROR|
04:25:13 ERROR| Traceback (most recent call last):
04:25:13 ERROR|   File "/home/virt-test/virttest/standalone_test.py", line 213, in run_once
04:25:13 ERROR|     run_func(self, params, env)
04:25:13 ERROR|   File "/home/virt-test/test-providers.d/downloads/io-github-autotest-libvirt/libvirt/tests/src/virsh_cmd/domain/virsh_dompmsuspend.py", line 81, in run_virsh_dompmsuspend
04:25:13 ERROR|     raise error.TestFail("Run failed with right command")
04:25:13 ERROR| TestFail: Run failed with right command
04:25:13 ERROR|
04:25:13 ERROR| FAIL type_specific.io-github-autotest-libvirt.virsh.dompmsuspend.running.hybrid -> TestFail: Run failed with right command

NFS shared storage configured unattended_install.import.import.default_install.aio_native fails

The first time everything goes fine, but if I re-run unattended_install.import.import.default_install.aio_native, the guest doesn't boot, as the run creates a rhel72-ppc64le.qcow2 image file in /var/lib/libvirt/images/sharing which is corrupted and remains even after NFS is unmounted.

Everything works fine if I do not configure NFS shared storage in base.cfg, as it then takes the image from avocado/data/avocado-vt/images/rhel72-ppc64le.qcow2.

avocado run --vt-type libvirt --vt-config /var/lib/libvirt/images/avocado/data/avocado-vt/backends/libvirt/cfg/boot.cfg --vt-only-filter 'scsi RHEL.7.2.ppc64le qcow2' --vt-no-filter 'macvtap user network'

image: /var/lib/libvirt/images/sharing/rhel72-ppc64le.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.8M
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: true
refcount bits: 16
corrupt: false

base.cfg:

# Nfs support related params. Please fill them depends on your environment
# For both nfs and local export nfs, following parammeters should be set:
storage_type = nfs
nfs_mount_dir = /var/lib/libvirt/images/sharing
nfs_mount_options = rw
# For nfs you also need set this one:
# For nfs export in local these options need be set:
export_dir = /var/lib/libvirt/images/NFS
# nfs_mount_src: the nfs resource you use
nfs_mount_src = /var/lib/libvirt/images/NFS
# At least one of the above parameters should be set. And if export_dir is
# set, it will cover the value set in nfs_mount_src.
# export_ip: optional.
# export_options: optional.

boot.cfg:

# File reserved for test runner (./run) use, don't modify.
include tests-shared.cfg

virt_install_binary = /usr/bin/virt-install
qemu_img_binary = /usr/bin/qemu-img
hvm_or_pv = hvm

# Allow os_type + os_variant to choose this automatically
machine_type = pseries
use_os_variant = yes
use_os_type = yes

only qcow2
only bridge
only scsi
only spapr-vlan
only virtio_net
only virtio_blk
only smp2
only no_9p_export
only no_virtio_rng
only no_pci_assignable
only (image_backend=filesystem)
only smallpages

variants:
    - boot_guest:
        only unattended_install.import.import.default_install.aio_native

Don't use /tmp for disk attach

Since qemu commit v5.2.0-rc0~110^2~1 ("util: give a specific error message when O_DIRECT doesn't work"), an error is reported when trying to live attach a file disk source from /tmp:

internal error: unable to execute QEMU command 'blockdev-add': Could not open '/tmp/disk-raw': filesystem does not support O_DIRECT

It seems some disk sources still use /tmp:

libvirt/tests/src/virsh_cmd/domain/virsh_detach_device.py:30:    def create_device_file(device_source="/tmp/attach.img"):

Here is why tmpfs cannot support O_DIRECT: https://lists.archive.carbon60.com/linux/kernel/720702
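
A possible direction for fixing the affected tests, assuming virttest.data_dir is available (it provides a per-job tmp directory on a regular filesystem rather than tmpfs):

    import os
    from virttest import data_dir

    # Put the attach image under the avocado-vt data/tmp directory instead of
    # tmpfs-backed /tmp, so qemu can open it with O_DIRECT (cache='none').
    device_source = os.path.join(data_dir.get_tmp_dir(), "attach.img")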

@chunfuwen @Yingshun @dzhengfy

Give the reason for "expect fail" failures

Expecting a command to pass is common, while expecting it to fail is not. It is confusing to debug a case failure where a command was expected to fail but succeeded, if no extra message is given:

2018-09-26 10:58:19,044 stacktrace       L0044 ERROR| Reproduced traceback from: /root/avocado/avocado/core/test.py:864
2018-09-26 10:58:19,044 stacktrace       L0047 ERROR| Traceback (most recent call last):
2018-09-26 10:58:19,044 stacktrace       L0047 ERROR|   File "/root/avocado-vt/avocado_vt/test.py", line 297, in runTest
2018-09-26 10:58:19,044 stacktrace       L0047 ERROR|     raise self.__status  # pylint: disable=E0702
2018-09-26 10:58:19,045 stacktrace       L0047 ERROR| TestFail: Run '/bin/virsh attach-disk --domain avocado-vt-vm1 --source /var/tmp/avocado_w4K5Mq/native.raw --target sda ${disk_attach_bus_option}' expect fail, but run successfully.

Could you please add an extra message explaining why failure is expected, for all "expect fail" code?
@chunfuwen
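
Something along these lines in the checking code would already help (status_error and the wording are illustrative):

    if status == 0 and status_error:
        test.fail("Command succeeded but was expected to fail: a usb disk "
                  "cannot use the given address type in this configuration")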

numatune failures for cgroup.stop testcases

numatune.qemu.qcow2.scsi.smp2.virtio_net.Fedora.24.ppc64le.powerkvm-libvirt.virsh.numatune.negative_testing.set_numa_parameter.running_guest.cgroup.stop
numatune.qemu.qcow2.scsi.smp2.virtio_net.Fedora.24.ppc64le.powerkvm-libvirt.virsh.numatune.negative_testing.get_numa_parameter.running_guest.cgroup.stop

These test cases fail because systemd mounts cgroups by default in the system and the cgconfig service does not affect the already-mounted cgroups.

2016-09-26 07:41:26,296 process          L0421 DEBUG| [stdout] systemd
2016-09-26 07:41:26,297 service          L0471 DEBUG| Setting ignore_status to True.
2016-09-26 07:41:26,297 process          L0334 INFO | Running 'systemctl status cgconfig.service'
2016-09-26 07:41:26,307 process          L0421 DEBUG| [stdout] * cgconfig.service - Control Group configuration service
2016-09-26 07:41:26,307 process          L0435 INFO | Command 'systemctl status cgconfig.service' finished with 0 after 0.00750517845154s
2016-09-26 07:41:26,308 process          L0421 DEBUG| [stdout]    Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor preset: disabled)
2016-09-26 07:41:26,308 process          L0421 DEBUG| [stdout]    Active: active (exited) since Mon 2016-09-26 07:40:39 AKDT; 46s ago
2016-09-26 07:41:26,308 process          L0421 DEBUG| [stdout]   Process: 139135 ExecStart=/usr/sbin/cgconfigparser -l /etc/cgconfig.conf -L /etc/cgconfig.d -s 1664 (code=exited, status=0/SUCCESS)
2016-09-26 07:41:26,308 process          L0421 DEBUG| [stdout]  Main PID: 139135 (code=exited, status=0/SUCCESS)
2016-09-26 07:41:26,308 process          L0421 DEBUG| [stdout]    Memory: 0B
2016-09-26 07:41:26,308 process          L0421 DEBUG| [stdout]    CGroup: /system.slice/cgconfig.service
2016-09-26 07:41:56,362 libvirt_vm       L1787 DEBUG| VM is down
2016-09-26 07:41:56,363 service          L0471 DEBUG| Setting ignore_status to True.
2016-09-26 07:41:56,364 process          L0334 INFO | Running 'systemctl stop cgconfig.service'
2016-09-26 07:41:56,411 process          L0435 INFO | Command 'systemctl stop cgconfig.service' finished with 0 after 0.0437998771667s
2016-09-26 07:41:56,411 process          L0334 INFO | Running 'true'
2016-09-26 07:41:56,415 process          L0435 INFO | Command 'true' finished with 0 after 0.000420093536377s
2016-09-26 07:41:56,415 process          L0334 INFO | Running 'ps -o comm 1'
2016-09-26 07:41:56,479 process          L0421 DEBUG| [stdout] COMMAND
2016-09-26 07:41:56,479 process          L0435 INFO | Command 'ps -o comm 1' finished with 0 after 0.0614249706268s
2016-09-26 07:41:56,480 process          L0421 DEBUG| [stdout] systemd
2016-09-26 07:41:56,480 utils_libvirtd   L0321 WARNI| This function was deprecated, Please use class utils_libvirtd.Libvirtd to manage libvirtd service.
2016-09-26 07:41:56,480 service          L0471 DEBUG| Setting ignore_status to True.
2016-09-26 07:41:56,480 process          L0334 INFO | Running 'systemctl reset-failed libvirtd.service'
2016-09-26 07:41:56,486 process          L0435 INFO | Command 'systemctl reset-failed libvirtd.service' finished with 0 after 0.00320601463318s
2016-09-26 07:41:56,487 service          L0471 DEBUG| Setting ignore_status to True.
2016-09-26 07:41:56,487 process          L0334 INFO | Running 'systemctl restart libvirtd.service'
2016-09-26 07:41:56,526 process          L0435 INFO | Command 'systemctl restart libvirtd.service' finished with 0 after 0.0359261035919s
2016-09-26 07:41:56,526 process          L0334 INFO | Running 'virsh list'
2016-09-26 07:41:57,179 process          L0421 DEBUG| [stdout]  Id    Name                           State
2016-09-26 07:41:57,180 process          L0435 INFO | Command 'virsh list' finished with 0 after 0.650489091873s
2016-09-26 07:41:57,180 process          L0421 DEBUG| [stdout] ----------------------------------------------------
2016-09-26 07:41:57,180 process          L0421 DEBUG| [stdout]
2016-09-26 07:41:57,180 process          L0334 INFO | Running 'true'
2016-09-26 07:41:57,184 process          L0435 INFO | Command 'true' finished with 0 after 0.000392913818359s
2016-09-26 07:41:57,184 process          L0334 INFO | Running 'ps -o comm 1'
2016-09-26 07:41:57,244 process          L0421 DEBUG| [stdout] COMMAND
2016-09-26 07:41:57,244 process          L0435 INFO | Command 'ps -o comm 1' finished with 0 after 0.0573031902313s
2016-09-26 07:41:57,244 process          L0421 DEBUG| [stdout] systemd
2016-09-26 07:41:57,245 utils_libvirtd   L0321 WARNI| This function was deprecated, Please use class utils_libvirtd.Libvirtd to manage libvirtd service.
2016-09-26 07:41:57,245 service          L0471 DEBUG| Setting ignore_status to True.
2016-09-26 07:41:57,245 process          L0334 INFO | Running 'systemctl status libvirtd.service'
2016-09-26 07:41:57,255 process          L0421 DEBUG| [stdout] * libvirtd.service - Virtualization daemon
2016-09-26 07:41:57,255 process          L0421 DEBUG| [stdout]    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
2016-09-26 07:41:57,255 process          L0435 INFO | Command 'systemctl status libvirtd.service' finished with 0 after 0.00542211532593s
2016-09-26 07:41:57,255 process          L0421 DEBUG| [stdout]    Active: active (running) since Mon 2016-09-26 07:41:56 AKDT; 728ms ago
2016-09-26 07:41:57,255 process          L0421 DEBUG| [stdout]      Docs: man:libvirtd(8)
2016-09-26 07:41:57,256 process          L0421 DEBUG| [stdout]            http://libvirt.org
2016-09-26 07:41:57,256 process          L0421 DEBUG| [stdout]  Main PID: 143933 (libvirtd)
2016-09-26 07:41:57,256 process          L0421 DEBUG| [stdout]    Memory: 33.9M
2016-09-26 07:41:57,256 process          L0421 DEBUG| [stdout]    CGroup: /system.slice/libvirtd.service
2016-09-26 07:41:57,256 process          L0421 DEBUG| [stdout]            |-  7783 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
2016-09-26 07:41:57,256 process          L0421 DEBUG| [stdout]            |-  7784 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
2016-09-26 07:41:57,766 utils_misc       L0607 DEBUG| waiting for domain virt-tests-vm1 to start (0.000013 secs)
2016-09-26 07:41:57,858 process          L0334 INFO | Running 'numactl --hardware'
2016-09-26 07:41:57,861 process          L0421 DEBUG| [stdout] available: 2 nodes (0,8)
2016-09-26 07:41:57,862 process          L0435 INFO | Command 'numactl --hardware' finished with 0 after 0.000995874404907s
2016-09-26 07:41:57,862 process          L0421 DEBUG| [stdout] node 0 cpus: 0 8 16 24 32 40 48 56 64 72
2016-09-26 07:41:57,862 process          L0421 DEBUG| [stdout] node 0 size: 262144 MB
2016-09-26 07:41:57,862 process          L0421 DEBUG| [stdout] node 0 free: 253192 MB
2016-09-26 07:41:57,862 process          L0421 DEBUG| [stdout] node 8 cpus: 80 88 96 104 112 120 128 136 144 152
2016-09-26 07:41:57,863 process          L0421 DEBUG| [stdout] node 8 size: 262144 MB
2016-09-26 07:41:57,863 process          L0421 DEBUG| [stdout] node 8 free: 254550 MB
2016-09-26 07:41:57,863 process          L0421 DEBUG| [stdout] node distances:
2016-09-26 07:41:57,863 process          L0421 DEBUG| [stdout] node   0   8
2016-09-26 07:41:57,863 process          L0421 DEBUG| [stdout]   0:  10  40
2016-09-26 07:41:57,863 process          L0421 DEBUG| [stdout]   8:  40  10
2016-09-26 07:41:57,864 process          L0334 INFO | Running 'numactl --hardware'
2016-09-26 07:41:57,867 process          L0421 DEBUG| [stdout] available: 2 nodes (0,8)
2016-09-26 07:41:57,867 process          L0435 INFO | Command 'numactl --hardware' finished with 0 after 0.000855207443237s
2016-09-26 07:41:57,868 process          L0421 DEBUG| [stdout] node 0 cpus: 0 8 16 24 32 40 48 56 64 72
2016-09-26 07:41:57,868 process          L0421 DEBUG| [stdout] node 0 size: 262144 MB
2016-09-26 07:41:57,868 process          L0421 DEBUG| [stdout] node 0 free: 253192 MB
2016-09-26 07:41:57,868 process          L0421 DEBUG| [stdout] node 8 cpus: 80 88 96 104 112 120 128 136 144 152
2016-09-26 07:41:57,868 process          L0421 DEBUG| [stdout] node 8 size: 262144 MB
2016-09-26 07:41:57,869 process          L0421 DEBUG| [stdout] node 8 free: 254550 MB
2016-09-26 07:41:57,869 process          L0421 DEBUG| [stdout] node distances:
2016-09-26 07:41:57,869 process          L0421 DEBUG| [stdout] node   0   8
2016-09-26 07:41:57,869 process          L0421 DEBUG| [stdout]   0:  10  40
2016-09-26 07:41:57,870 utils_misc       L1794 WARNI| Can not find the cpu list information from both numactl and sysfs. Please check your system.
2016-09-26 07:41:57,870 process          L0334 INFO | Running 'numactl --hardware'
2016-09-26 07:41:57,873 process          L0421 DEBUG| [stdout] available: 2 nodes (0,8)
2016-09-26 07:41:57,873 process          L0435 INFO | Command 'numactl --hardware' finished with 0 after 0.000849962234497s
2016-09-26 07:41:57,874 process          L0421 DEBUG| [stdout] node 0 cpus: 0 8 16 24 32 40 48 56 64 72
2016-09-26 07:41:57,874 process          L0421 DEBUG| [stdout] node 0 size: 262144 MB
2016-09-26 07:41:57,874 process          L0421 DEBUG| [stdout] node 0 free: 253192 MB
2016-09-26 07:41:57,874 process          L0421 DEBUG| [stdout] node 8 cpus: 80 88 96 104 112 120 128 136 144 152
2016-09-26 07:41:57,874 process          L0421 DEBUG| [stdout] node 8 size: 262144 MB
2016-09-26 07:41:57,874 process          L0421 DEBUG| [stdout] node 8 free: 254550 MB
2016-09-26 07:41:57,875 process          L0421 DEBUG| [stdout] node distances:
2016-09-26 07:41:57,875 process          L0421 DEBUG| [stdout] node   0   8
2016-09-26 07:41:57,875 process          L0421 DEBUG| [stdout]   0:  10  40
2016-09-26 07:41:57,875 process          L0421 DEBUG| [stdout]   8:  40  10
2016-09-26 07:41:57,875 process          L0334 INFO | Running 'numactl --hardware'
2016-09-26 07:41:57,879 process          L0421 DEBUG| [stdout] available: 2 nodes (0,8)
2016-09-26 07:41:57,879 process          L0435 INFO | Command 'numactl --hardware' finished with 0 after 0.000833034515381s
2016-09-26 07:41:57,879 process          L0421 DEBUG| [stdout] node 0 cpus: 0 8 16 24 32 40 48 56 64 72
2016-09-26 07:41:57,879 process          L0421 DEBUG| [stdout] node 0 size: 262144 MB
2016-09-26 07:41:57,879 process          L0421 DEBUG| [stdout] node 0 free: 253192 MB
2016-09-26 07:41:57,880 process          L0421 DEBUG| [stdout] node 8 cpus: 80 88 96 104 112 120 128 136 144 152
2016-09-26 07:41:57,880 process          L0421 DEBUG| [stdout] node 8 size: 262144 MB
2016-09-26 07:41:57,880 process          L0421 DEBUG| [stdout] node 8 free: 254550 MB
2016-09-26 07:41:57,880 process          L0421 DEBUG| [stdout] node distances:
2016-09-26 07:41:57,880 process          L0421 DEBUG| [stdout] node   0   8
2016-09-26 07:41:57,880 process          L0421 DEBUG| [stdout]   0:  10  40
2016-09-26 07:41:57,880 process          L0421 DEBUG| [stdout]   8:  40  10
2016-09-26 07:41:57,881 process          L0334 INFO | Running 'numactl --hardware'
2016-09-26 07:41:57,884 process          L0421 DEBUG| [stdout] available: 2 nodes (0,8)
2016-09-26 07:41:57,884 process          L0435 INFO | Command 'numactl --hardware' finished with 0 after 0.000823974609375s
2016-09-26 07:41:57,885 process          L0421 DEBUG| [stdout] node 0 cpus: 0 8 16 24 32 40 48 56 64 72
2016-09-26 07:41:57,885 process          L0421 DEBUG| [stdout] node 0 size: 262144 MB
2016-09-26 07:41:57,885 process          L0421 DEBUG| [stdout] node 0 free: 253192 MB
2016-09-26 07:41:57,885 process          L0421 DEBUG| [stdout] node 8 cpus: 80 88 96 104 112 120 128 136 144 152
2016-09-26 07:41:57,885 process          L0421 DEBUG| [stdout] node 8 size: 262144 MB
2016-09-26 07:41:57,885 process          L0421 DEBUG| [stdout] node 8 free: 254550 MB
2016-09-26 07:41:57,886 process          L0421 DEBUG| [stdout] node distances:
2016-09-26 07:41:57,886 process          L0421 DEBUG| [stdout] node   0   8
2016-09-26 07:41:57,886 process          L0421 DEBUG| [stdout]   0:  10  40
2016-09-26 07:41:57,886 process          L0421 DEBUG| [stdout]   8:  40  10
2016-09-26 07:41:57,886 process          L0334 INFO | Running 'numactl --hardware'
2016-09-26 07:41:57,889 process          L0421 DEBUG| [stdout] available: 2 nodes (0,8)
2016-09-26 07:41:57,890 process          L0435 INFO | Command 'numactl --hardware' finished with 0 after 0.000887870788574s
2016-09-26 07:41:57,890 process          L0421 DEBUG| [stdout] node 0 cpus: 0 8 16 24 32 40 48 56 64 72
2016-09-26 07:41:57,890 process          L0421 DEBUG| [stdout] node 0 size: 262144 MB
2016-09-26 07:41:57,890 process          L0421 DEBUG| [stdout] node 0 free: 253192 MB
2016-09-26 07:41:57,890 process          L0421 DEBUG| [stdout] node 8 cpus: 80 88 96 104 112 120 128 136 144 152
2016-09-26 07:41:57,891 process          L0421 DEBUG| [stdout] node 8 size: 262144 MB
2016-09-26 07:41:57,891 process          L0421 DEBUG| [stdout] node 8 free: 254550 MB
2016-09-26 07:41:57,891 process          L0421 DEBUG| [stdout] node distances:
2016-09-26 07:41:57,891 process          L0421 DEBUG| [stdout] node   0   8
2016-09-26 07:41:57,891 process          L0421 DEBUG| [stdout]   0:  10  40
2016-09-26 07:41:57,891 process          L0421 DEBUG| [stdout]   8:  40  10
2016-09-26 07:41:57,892 virsh_numatune   L0115 DEBUG| host node list is [0, 8]
2016-09-26 07:41:57,892 virsh            L0645 DEBUG| Running virsh command: numatune virt-tests-vm1
2016-09-26 07:41:57,892 process          L0334 INFO | Running '/bin/virsh numatune virt-tests-vm1'
2016-09-26 07:41:57,907 process          L0421 DEBUG| [stdout] numa_mode      : strict
2016-09-26 07:41:57,907 process          L0421 DEBUG| [stdout] numa_nodeset   :
2016-09-26 07:41:57,907 process          L0421 DEBUG| [stdout]
2016-09-26 07:41:57,907 process          L0435 INFO | Command '/bin/virsh numatune virt-tests-vm1' finished with 0 after 0.0130209922791s
2016-09-26 07:41:57,908 virsh            L0691 DEBUG| status: 0
2016-09-26 07:41:57,908 virsh            L0692 DEBUG| stdout: numa_mode      : strict
numa_nodeset   :
2016-09-26 07:41:57,908 virsh            L0693 DEBUG| stderr:
2016-09-26 07:41:58,231 virsh            L1332 DEBUG| Undefine VM virt-tests-vm1
2016-09-26 07:41:58,248 virsh            L1316 DEBUG| Define VM from /var/tmp/xml_utils_temp_RvA2SG.xml
2016-09-26 07:41:58,265 service          L0471 DEBUG| Setting ignore_status to True.
2016-09-26 07:41:58,265 process          L0334 INFO | Running 'systemctl start cgconfig.service'
2016-09-26 07:41:58,320 process          L0435 INFO | Command 'systemctl start cgconfig.service' finished with 0 after 0.0518870353699s
2016-09-26 07:41:58,321 process          L0334 INFO | Running 'true'
2016-09-26 07:41:58,324 process          L0435 INFO | Command 'true' finished with 0 after 0.000397920608521s
2016-09-26 07:41:58,324 process          L0334 INFO | Running 'ps -o comm 1'
2016-09-26 07:41:58,387 process          L0421 DEBUG| [stdout] COMMAND
2016-09-26 07:41:58,387 process          L0435 INFO | Command 'ps -o comm 1' finished with 0 after 0.0610570907593s
2016-09-26 07:41:58,388 process          L0421 DEBUG| [stdout] systemd
2016-09-26 07:41:58,388 utils_libvirtd   L0321 WARNI| This function was deprecated, Please use class utils_libvirtd.Libvirtd to manage libvirtd service.
2016-09-26 07:41:58,388 service          L0471 DEBUG| Setting ignore_status to True.
2016-09-26 07:41:58,388 process          L0334 INFO | Running 'systemctl reset-failed libvirtd.service'
2016-09-26 07:41:58,394 process          L0435 INFO | Command 'systemctl reset-failed libvirtd.service' finished with 0 after 0.00322294235229s
2016-09-26 07:41:58,395 service          L0471 DEBUG| Setting ignore_status to True.
2016-09-26 07:41:58,395 process          L0334 INFO | Running 'systemctl restart libvirtd.service'
2016-09-26 07:41:58,437 process          L0435 INFO | Command 'systemctl restart libvirtd.service' finished with 0 after 0.0389850139618s
2016-09-26 07:41:58,437 process          L0334 INFO | Running 'virsh list'
2016-09-26 07:41:59,107 process          L0421 DEBUG| [stdout]  Id    Name                           State
2016-09-26 07:41:59,107 process          L0421 DEBUG| [stdout] ----------------------------------------------------
2016-09-26 07:41:59,108 process          L0421 DEBUG| [stdout]
2016-09-26 07:41:59,108 process          L0435 INFO | Command 'virsh list' finished with 0 after 0.667789936066s
2016-09-26 07:41:59,160 qemu_storage     L0480 DEBUG| Checking image file /var/lib/libvirt/images/workspace/runAvocadoFVTTest/avocado-fvt-wrapper/data/avocado-vt/images/f24-ppc64le.qcow2
2016-09-26 07:41:59,339 process          L0334 INFO | Running 'true'
2016-09-26 07:41:59,342 process          L0435 INFO | Command 'true' finished with 0 after 0.00040602684021s
2016-09-26 07:41:59,343 process          L0334 INFO | Running 'ps -o comm 1'
2016-09-26 07:41:59,395 process          L0421 DEBUG| [stdout] COMMAND
2016-09-26 07:41:59,396 process          L0435 INFO | Command 'ps -o comm 1' finished with 0 after 0.0502989292145s
2016-09-26 07:41:59,396 process          L0421 DEBUG| [stdout] systemd
2016-09-26 07:41:59,519 stacktrace       L0038 ERROR|
2016-09-26 07:41:59,519 stacktrace       L0041 ERROR| Reproduced traceback from: /usr/lib/python2.7/site-packages/avocado_plugins_vt-41.0-py2.7.egg/avocado_vt/test.py:420
2016-09-26 07:41:59,519 stacktrace       L0044 ERROR| Traceback (most recent call last):
2016-09-26 07:41:59,520 stacktrace       L0044 ERROR|   File "/usr/lib/python2.7/site-packages/avocado_plugins_vt-41.0-py2.7.egg/avocado_vt/test.py", line 206, in runTest
2016-09-26 07:41:59,520 stacktrace       L0044 ERROR|     raise exceptions.TestFail(details)
2016-09-26 07:41:59,520 stacktrace       L0044 ERROR| TestFail: Unexpected return code 0

Add cpu, memory limit tests

Planning to add limit (boundary value) tests for CPU and memory in libvirt.
The idea is to boot the guest and run a simple stress workload inside it with the following configurations (a config sketch follows the list):

  1. maxvcpus (virsh maxvcpus) and 1x system memory
  2. maxvcpus (virsh maxvcpus) and 2x system memory
  3. maxvcpus (virsh maxvcpus) and 8-16G memory
  4. 1-4 vcpus and 2x system memory
  ... and similar combinations.

Support for changing maxvcpu and maxmem from virt-install is already available:
avocado-framework/avocado-vt@659cbcb
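
A minimal sketch of how the boundary values above could be derived on the host, assuming the test is written in Python like the existing avocado-vt tests; the helper names below are illustrative only and not part of any existing API:

    import subprocess

    def host_memory_kb():
        """Total host memory in KiB, read from /proc/meminfo."""
        with open("/proc/meminfo") as meminfo:
            for line in meminfo:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1])
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    def boundary_configs():
        """Yield (vcpus, memory_kb) pairs for the cases listed above."""
        # 'virsh maxvcpus' reports the hypervisor vcpu limit referenced in the
        # list above; int() accepts its raw output once whitespace is stripped.
        max_vcpus = int(subprocess.check_output(["virsh", "maxvcpus"]).strip())
        mem_kb = host_memory_kb()
        yield max_vcpus, mem_kb              # 1. maxvcpus and 1x system memory
        yield max_vcpus, 2 * mem_kb          # 2. maxvcpus and 2x system memory
        yield max_vcpus, 16 * 1024 * 1024    # 3. maxvcpus and 16G (upper end of 8-16G)
        yield 4, 2 * mem_kb                  # 4. a few (1-4) vcpus and 2x system memory

Each (vcpus, memory) pair could then be passed to virt-install (or written into the domain XML) before the boot-and-stress run described above.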
