ovirt / ovirt-ansible-collection

Ansible collection with official oVirt modules and roles



ovirt-ansible-collection's People

Contributors

ahadas, alancoding, arachmani, avivtur, barpavel, bverschueren, didib, drokath, dupondje, erav, ganto, klettit, kobihk, michalskrivanek, mmartinv, mnecas, mrkev-gh, mwperina, nijinashok, olirdx, roniezr, s-hertel, saifabusaleh, seansackowitz, shanemcd, sjd78, snecklifter, tinez, tiraboschi, vjuranek


ovirt-ansible-collection's Issues

Bond interfaces cannot actually have ipv4 or ipv6 addresses but 001_validate_network_interfaces.yml requires it?

SUMMARY

I'm looking at the logic in 001_validate_network_interfaces.yml, trying to figure out why my 802.3ad bond0 (with bridge0) doesn't pass validation even though it works fine on the host.

Looking at the logic, it seems to require an IPv4 or IPv6 address. At least on Enterprise Linux 8, bonds cannot themselves have IP addresses; that is a property of any associated bridge.
Specifically, I think a bond configured with nmcli cannot have ipv4.* or ipv6.* parameters; you need a bridge for that.

I'm not exactly sure what the correct fix is, probably just to allow using a bridge on a bond that fits the supported criteria?
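
For context, here is a minimal sketch of this kind of layout using the community.general.nmcli module (an assumption; interface names and addresses are placeholders): the bond itself carries no IP configuration, while the bridge on top of it does. Enslaving the physical NICs to the bond and attaching the bond to the bridge are omitted from the sketch.

- name: Create the 802.3ad bond (no IP configuration on the bond itself)
  community.general.nmcli:
    conn_name: bond0
    ifname: bond0
    type: bond
    mode: 802.3ad
    state: present

- name: Create the bridge that carries the IP configuration
  community.general.nmcli:
    conn_name: bridge0
    ifname: bridge0
    type: bridge
    ip4: 192.0.2.10/24        # placeholder address
    gw4: 192.0.2.1            # placeholder gateway
    state: present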

vm_infra role throws 'permission denied' when run from AWX

SUMMARY

A playbook to provision an oVirt VM runs successfully from the Ansible CLI, but the same playbook fails with a permission denied error in the ovirt_vm task when run via AWX.

COMPONENT NAME

ovirt.ovirt.vm_infra role; I suspect the ovirt_vm module

STEPS TO REPRODUCE

Host: CentOS-8 (python36) with Ansible 2.9.15, ovirt-engine-sdk-python 4.4.4, ovirt-ansible-collection 1.2.1
AWX: 14.1.0 (docker-compose install), custom virtualenv with python36, ansible 2.9.15, ovirt-engine-sdk-python 4.4.4
docker-compose.txt

I have tried the latest SDK from pip (4.4.7) as well, with no change. A custom venv is needed in AWX because the default environment only has SDK version 4.3.

Sample Playbook used:

- name: oVirt ansible collection
  hosts: localhost
  connection: local
  collections:
    - ovirt.ovirt
  vars_files:
    - vault.yml

  vars:
    vms:
      - name: c7_test1
        template: CentOS-7-x86_64-GenericCloud-2003
        cores: 2
        memory: 4GiB
        nics:
          - name: eth0
            network: Lab
        disks:
          - name: CentOS-7-x86_64-GenericCloud-2003
            size: 30GiB
            name_prefix: false
            interface: virtio_iscsi
        type: server
        cloud_init:
          dns_servers: '8.8.8.8 8.8.4.4'
          host_name: c7_test1.example.com
          custom_script: |
            users:
              - name: ansible
                groups: wheel
        cloud_init_nics:
          - nic_name: eth0
            nic_boot_protocol: static
            nic_ip_address: 172.22.4.66
            nic_netmask: 255.255.255.0
            nic_gateway: 172.22.4.1
            nic_on_boot: True

  tasks:
    - block:
        - name: Obtain SSO token using username/password credentials
          ovirt.ovirt.ovirt_auth:
            url: https://testengine.lab.example.com/ovirt-engine/api
            username: admin@internal
            password: "{{ vault_ovirt_admin_password }}"

        - import_role:
            name: ovirt.ovirt.vm_infra

      always:
        - name: Always revoke the SSO token
          ovirt_auth:
            state: absent
            ovirt_auth: "{{ ovirt_auth }}"
EXPECTED RESULTS

VM is provisioned in oVirt both via Ansible CLI and also when run from AWX

ACTUAL RESULTS

VM provisioning successful when run via CLI, but fails from AWX with the output below

TASK [ovirt.ovirt.vm_infra : Wait for VMs to be added] *************************
task path: /tmp/awx_67_3yzkbikp/requirements_collections/ansible_collections/ovirt/ovirt/roles/vm_infra/tasks/vm_state_present.yml:8
Using module file /opt/venv/test2/lib/python3.6/site-packages/ansible/modules/utilities/logic/async_status.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/opt/venv/test1/bin/python && sleep 0'
failed: [localhost] (item={'started': 1, 'finished': 0, 'ansible_job_id': '387694935680.1896', 'results_file': '/var/lib/awx/.ansible_async/387694935680.1896', 'changed': False, 'failed': False}) => {
    "ansible_job_id": "387694935680.1896",
    "ansible_loop_var": "item",
    "attempts": 1,
    "changed": false,
    "cmd": "/tmp/ansible-tmp-1605164139.2402353-1885-54477151523365/AnsiballZ_ovirt_vm.py",
    "finished": 1,
    "invocation": {
        "module_args": {
            "_async_dir": "/var/lib/awx/.ansible_async",
            "jid": "387694935680.1896",
            "mode": "status"
        }
    },
    "item": {
        "ansible_job_id": "387694935680.1896",
        "changed": false,
        "failed": false,
        "finished": 0,
        "results_file": "/var/lib/awx/.ansible_async/387694935680.1896",
        "started": 1
    },
    "msg": "[Errno 13] Permission denied: '/tmp/ansible-tmp-1605164139.2402353-1885-54477151523365/AnsiballZ_ovirt_vm.py'",
    "outdata": "",
    "stderr": "",
    "stderr_lines": []
}
Read vars_file 'vault.yml'

Full debug output of this job attached
job_67.txt

'NoneType' object has no attribute 'disk_attachments' when modifying existing VMs

SUMMARY

When modifying an existing oVirt VM, ovirt_vm fails with the error message 'NoneType' object has no attribute 'disk_attachments'.

I assume the bug was introduced by ansible/ansible@508ebf2#diff-9415a363b6dea8b0c2cb21bd497654a4. With this patch applied, the function __get_template_with_version only has a code path for new VMs and no code path for existing VMs at all, neither with a given template (the previously working code) nor with template lookup via the oVirt API. As a result, template is None for all existing VMs.

(copied from ansible/ansible#71620)

ISSUE TYPE
  • Bug Report
COMPONENT NAME

plugins/modules/ovirt_vm.py

ANSIBLE VERSION

The bug was introduced in Ansible 2.9.3, is reproducible with Ansible 2.9.12, and is probably also present in newer versions.

CONFIGURATION
DEFAULT_STRATEGY_PLUGIN_PATH(/home/stefan/Dokumente/Develop/deployment/provisioning-ansible/ansible.cfg) = ['/home/stefan/Dokumente/Develop/deployment/provisioning-ansible/lib/plugins/strategy']
INTERPRETER_PYTHON(/home/stefan/Dokumente/Develop/deployment/provisioning-ansible/ansible.cfg) = auto_silent

OS / ENVIRONMENT

reproducible using Arch Linux, Ubuntu Linux, Debian Linux, ...

STEPS TO REPRODUCE
  • Create oVirt VM using ovirt_vm module
  • Change parameter of VM
  • Run ovirt_vm module again
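
A minimal reproducer sketch (names and values are placeholders; assumes an existing cluster and template): run the task once to create the VM, then change a parameter such as memory and run it again; the second run fails as described.

- name: Create, then later modify, a VM from a template
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: repro-vm                  # placeholder VM name
    cluster: Default                # placeholder cluster
    template: CentOS-8-template     # placeholder template
    memory: 2GiB                    # change to e.g. 4GiB for the second run
    state: present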
EXPECTED RESULTS

Running VM with modified parameters

ACTUAL RESULTS
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'NoneType' object has no attribute 'disk_attachments'
fatal: [identity-production-worker-01.identity.production.intern.campact.de]: FAILED! => {"changed": false, "msg": "'NoneType' object has no attribute 'disk_attachments'"}

ovirt_vm: host devices cannot be configured due to pinning requirement

SUMMARY

When trying to set host devices, I received the following error:

Cannot add Host devices. VM must be pinned to a host.

This happens despite setting the host, setting the placement policy to pinned, and setting migrate to no.

COMPONENT NAME

ovirt_vm

STEPS TO REPRODUCE

Attempt to set host devices as shown in the documentation:

- name: Attach host devices to virtual machine
  ovirt_vm:
    name: myovirtvm
    host: myovirthost
    placement_policy: pinned
    host_devices:
      - name: pci_0000_00_07_0
        state: present
      - name: pci_0000_00_08_0
        state: present
EXPECTED RESULTS

Devices are passed through from host to VM.

ACTUAL RESULTS

Error above. I can configure manually in the RHV gui.

ovirt_engine_setup_restore_engine_cleanup param hangs the role execution

I tried running the following playbook against a running engine:

- hosts: jzmeskal_ansible_engine_1
  vars:
    ovirt_engine_setup_admin_password: <censored>
    ovirt_engine_setup_version: '4.4'
    ovirt_engine_setup_restore_engine_cleanup: true
    ovirt_engine_setup_restore_file: '/root/engine.backup'
    ovirt_engine_setup_restore_scopes:
      - 'all'
    ovirt_engine_setup_restore_options:
      log: '/root/ansible_restore.log'
      restore-permissions: ''
      provision-all-databases: ''

  roles:
    - ovirt.engine-setup

The result was that the role execution hung at TASK [ovirt.engine-setup : Run engine cleanup command] for well over half an hour, until I forcefully interrupted it with ^C. I suspect this is because engine-cleanup is an interactive command and expects some input from the user.
When I ran engine-cleanup manually and executed the above playbook again with ovirt_engine_setup_restore_engine_cleanup set to false, everything went fine.
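
For reference, the workaround amounts to running engine-cleanup manually on the engine host and then re-running the play with the cleanup flag disabled, e.g. (a sketch based on the report):

- hosts: jzmeskal_ansible_engine_1
  vars:
    ovirt_engine_setup_admin_password: <censored>
    ovirt_engine_setup_version: '4.4'
    # engine-cleanup was already run manually, so skip it here
    ovirt_engine_setup_restore_engine_cleanup: false
    ovirt_engine_setup_restore_file: '/root/engine.backup'
    ovirt_engine_setup_restore_scopes:
      - 'all'
  roles:
    - ovirt.engine-setup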

Inclusion of ovirt.ovirt in Ansible 2.10

This collection will be included in Ansible 2.10 because it contains modules and/or plugins that were included in Ansible 2.9. Please review:

DEADLINE: 2020-08-18

The latest version of the collection available on August 18 will be included in Ansible 2.10.0, except possibly newer versions which differ only in the patch level. (For details, see the roadmap). Please release version 1.0.0 of your collection by this date! If 1.0.0 does not exist, the same 0.x.y version will be used in all of Ansible 2.10 without updates, and your 1.x.y release will not be included until Ansible 2.11 (unless you request an exception at a community working group meeting and go through a demanding manual process to vouch for backwards compatibility... you want to avoid this!).

Follow semantic versioning rules

Your collection versioning must follow all semver rules. This means:

  • Patch level releases can only contain bugfixes;
  • Minor releases can contain new features, new modules and plugins, and bugfixes, but must not break backwards compatibility;
  • Major releases can break backwards compatibility.

Changelogs and Porting Guide

Your collection should provide data for the Ansible 2.10 changelog and porting guide. The changelog and porting guide are automatically generated from ansible-base, and from the changelogs of the included collections. All changes from the breaking_changes, major_changes, removed_features and deprecated_features sections will appear in both the changelog and the porting guide. You have two options for providing changelog fragments to include:

  1. If possible, use the antsibull-changelog tool, which uses the same changelog fragment as the ansible/ansible repository (see the documentation).
  2. If you cannot use antsibull-changelog, you can provide the changelog in a machine-readable format as changelogs/changelog.yaml inside your collection (see the documentation of changelogs/changelog.yaml format).

If you cannot contribute to the integrated Ansible changelog using one of these methods, please provide a link to your collection's changelog by creating an issue in https://github.com/ansible-community/ansible-build-data/. If you do not provide changelogs/changelog.yml or a link, users will not be able to find out what changed in your collection from the Ansible changelog and porting guide.
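
For example, a changelog fragment for antsibull-changelog is a small YAML file under changelogs/fragments/ whose keys are the changelog sections and whose values are lists of entries (the file name and entry texts below are illustrative):

# changelogs/fragments/0000-example.yml (illustrative file name)
bugfixes:
  - ovirt_vm - example entry describing a bugfix.
minor_changes:
  - ovirt_disk - example entry describing a new option or behaviour.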

Make sure your collection passes the sanity tests

Run ansible-test sanity --docker -v in the collection with the latest ansible-base or stable-2.10 ansible/ansible checkout.

Keep informed

Be sure you're subscribed to:

Questions and Feedback

If you have questions or want to provide feedback, please see the Feedback section in the collection requirements.

(Internal link to keep track of issues: ansible-collections/overview#102)

ovirt_disk: Provide option to configure direct LUN with SCSI Pass-Through

SUMMARY

When creating a direct LUN for a virtual machine, ovirt web interface provides the option to enable SCSI Pass-Through (enabled by default in this dialog):

ovirt_new_direct_LUN

ovirt_disk currently has no option to configure SCSI Pass-Through for a direct LUN. Can this option be added, similar to shareable?

COMPONENT NAME

ovirt_disk

ADDITIONAL INFORMATION

A "SCSI Pass-Through" option will be used to configure a direct LUN for SCSI Pass-Through.

  - name: VM -> Add SCSI Pass-Through Direct LUN
    ovirt_disk:
      auth:
        username: "admin@internal"
        password: "password"
        insecure: yes
        url: "https://ovirt.domain/ovirt-engine/api"

      name: "testvm_TEST01"
      host: "testhost"
      interface: virtio_scsi
      shareable: yes
      scsi_passthrough: yes
      vm_name: "testvm"
      format: raw
      logical_unit:
        id: "36000xxxxd38"
        storage_type: fcp

ovirt_disk: invalid Actual Size after uploading sparse qcow2 image to datastore

From @pacsikaz on Mar 02, 2020 16:20

SUMMARY

Using the ovirt_disk Ansible module to upload a sparse qcow2 image results in an invalid actual size on a storage domain backed by iSCSI storage.
At the source, the qcow2 file size is less than 1G, but after uploading, the actual size is larger than the virtual size given in the playbook. See details in the attached logs.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

ovirt_disk

ANSIBLE VERSION
[root@FFM-MAINT ansible]# ansible --version
ansible 2.9.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
[root@FFM-MAINT ansible]#

CONFIGURATION
[root@FFM-MAINT ansible]# ansible-config dump --only-changed
[root@FFM-MAINT ansible]#

OS / ENVIRONMENT

RHEV version: 4.3.7.2-0.1.el7
Storage domain is backed by an iSCSI (HP MSA) storage device

rhevm_rpm_versions.log

STEPS TO REPRODUCE
  1. Create a sparse QCOW2 image, e.g. with:
    qemu-img create -f qcow2 /root/testimage.qcow2 1G

  2. check details of the created image with qemu-img info
    qemu-img info /root/testimage.qcow2
    image: /root/testimage.qcow2
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 196K
    cluster_size: 65536
    Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

  3. Use the playbook below to upload the image to oVirt Engine (RHEV-M), specifying the target storage domain, virtual size and source image file as mandatory parameters.

ansible-playbook -vvv --ask-vault-pass image_upload.yml --extra-vars "image_name=testvmdisk image_size=10GiB storage_domain_name=BACKUP image_full_path=/root/testimage.qcow2"

  4. Check the uploaded image's Actual Size in the Engine GUI: Storage - Domains - select target storage domain - Disks

EXAMPLE PLAYBOOK

"

  • hosts: localhost
    connection: smart

    vars_files:

    • vars/vars.yml
    • vault/vault_rhevm.yml

    vars:
    timestamp: "{{ansible_date_time.iso8601_basic_short}}"

    tasks:

    • name: Obtains SSO token
      ovirt_auth:
      url: "{{ url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      insecure: "{{ insecure }}"
      tags:

      • auth
    • name: Upload backup vm disk
      ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "{{ disk_image_name_var }}"
      format: cow
      bootable: yes
      storage_domain: "{{ storage_domain_name_var }}"
      size: "{{image_size_var }}"
      sparse: yes
      sparsify: yes
      timeout: 3600
      image_path: "{{ image_path_var }}"
      tags: upload_vm_disk
      "

EXPECTED RESULTS

When checking in the RHEV-M GUI, the Virtual Size should equal the setting in the playbook, while the Actual Size should match the actual size of the image (as also visible as "disk size" in qemu-img info).

ACTUAL RESULTS

Actual Size is invalid and becomes bigger than Virtual Size. Neither the sparse nor the sparsify module parameter makes any difference in the end result.

Note that uploading the very same image with upload_disk.py results in correct Actual Size on Storage Domain.

ansible_image_upload_logs.log

Result in RHEV-M GUI: (screenshot attached)

Copied from original issue: ansible/ansible#67933

License conflict?

SUMMARY

I noticed your license is Apache2 but a lot of plugins and content is under GPL. This needs clarification.

nic_on_boot boolean cannot be false

As per https://github.com/oVirt/ovirt-engine/blob/5b6a06433a5c352b53217e75b3219f84f4d41095/backend/manager/modules/common/src/main/java/org/ovirt/engine/core/common/utils/VmInitToOpenStackMetadataAdapter.java#L48 it seems that the cloud-init option to set nic_on_boot to false is not valid.

When I attempt to do so I see:

"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot add VM. Start On Boot selection in cloud-init configuration is invalid. Only 'true' is supported.]\". HTTP response code is 400."

ovirt_vm cd_iso by name results in a 404 error

From @n0p90 on Jul 09, 2020 22:03

SUMMARY

Module ovirt_vm crashes with the error HTTP response code is 404 when starting a VM with an attached ISO provided by name through the cd_iso parameter.

Reverting commit 23761b98801ff6891d38af32d961daba1baf1bce fixes the issue.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

cd_iso parameter of the ovirt_vm module

ANSIBLE VERSION
ansible 2.9.10
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.7 (default, Jun  4 2020, 15:43:14) [GCC 9.3.1 20200408 (Red Hat 9.3.1-2)]
CONFIGURATION
OS / ENVIRONMENT
  • Ansible machine: Fedora 31
  • oVirt engine version: 4.3.10.4-1.el7
STEPS TO REPRODUCE

  • Upload an ISO to the iso_storage storage domain (Domain Type: ISO, Storage Type: NFS):

    ovirt-iso-uploader -i iso_storage --force upload centos-8.2.iso

  • Run a playbook containing the following task:
- ovirt_vm:
    state: running
    name: test-vm
    cd_iso: centos-8.2.iso
    boot_devices:
      - cdrom
    disks:
      - name: hdd1
        bootable: true
    nics:
      - name: nic1
        profile_name: deploynet
    cluster: my_cluster
    host: ovirt-node1.example.com
    auth: "{{ ovirt_auth }}"
EXPECTED RESULTS

The VM boots correctly using the provided ISO image.

ACTUAL RESULTS

The play crashes with the following stack trace:

Traceback (most recent call last):
  File "/tmp/ansible_ovirt_vm_payload_96vlgeqs/ansible_ovirt_vm_payload.zip/ansible/modules/cloud/ovirt/ovirt_vm.py", line 2469, in main
  File "/tmp/ansible_ovirt_vm_payload_96vlgeqs/ansible_ovirt_vm_payload.zip/ansible/modules/cloud/ovirt/ovirt_vm.py", line 1632, in post_present
  File "/tmp/ansible_ovirt_vm_payload_96vlgeqs/ansible_ovirt_vm_payload.zip/ansible/modules/cloud/ovirt/ovirt_vm.py", line 1695, in _attach_cd
  File "/tmp/ansible_ovirt_vm_payload_96vlgeqs/ansible_ovirt_vm_payload.zip/ansible/modules/cloud/ovirt/ovirt_vm.py", line 1688, in __get_cd_id
  File "/usr/local/lib64/python3.7/site-packages/ovirt_engine_sdk_python-4.3.3-py3.7-linux-x86_64.egg/ovirtsdk4/services.py", line 37145, in get
    return self._internal_get(headers, query, wait)
  File "/usr/local/lib64/python3.7/site-packages/ovirt_engine_sdk_python-4.3.3-py3.7-linux-x86_64.egg/ovirtsdk4/service.py", line 211, in _internal_get
    return future.wait() if wait else future
  File "/usr/local/lib64/python3.7/site-packages/ovirt_engine_sdk_python-4.3.3-py3.7-linux-x86_64.egg/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/local/lib64/python3.7/site-packages/ovirt_engine_sdk_python-4.3.3-py3.7-linux-x86_64.egg/ovirtsdk4/service.py", line 208, in callback
    self._check_fault(response)
  File "/usr/local/lib64/python3.7/site-packages/ovirt_engine_sdk_python-4.3.3-py3.7-linux-x86_64.egg/ovirtsdk4/service.py", line 130, in _check_fault
    body = self._internal_read_body(response)
  File "/usr/local/lib64/python3.7/site-packages/ovirt_engine_sdk_python-4.3.3-py3.7-linux-x86_64.egg/ovirtsdk4/service.py", line 312, in _internal_read_body
    self._raise_error(response)
  File "/usr/local/lib64/python3.7/site-packages/ovirt_engine_sdk_python-4.3.3-py3.7-linux-x86_64.egg/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.NotFoundError: HTTP response code is 404.

Copied from original issue: ansible/ansible#70547

ovirt_vm: cloud_init_persist leads to changed always being True

SUMMARY

Multiple runs of ovirt_vm return "changed: True", even when no changes have (or should have) been made, when cloud_init_persist is defined.

This is due to line 1579 in ovirt_vm.py, in the boolean logic for the return value of _update_check:

not self.param('cloud_init_persist') and

_update_check will always return false if this param is defined. This means that self.changed = True is always met on line 617 of ovirt.py.

ISSUE TYPE
  • Bug Report
COMPONENT NAME
ANSIBLE VERSION
ansible 2.9.10
  config file = /home/jake2184/ansible.cfg
  configured module search path = ['/home/jake2184/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
CONFIGURATION
ANSIBLE_PIPELINING(/home/jake2184/ansible.cfg) = True
DEFAULT_FORKS(/home/jake2184/ansible.cfg) = 25
DEFAULT_REMOTE_USER(/home/jake2184/ansible.cfg) = root
DEFAULT_STDOUT_CALLBACK(/home/jake2184/ansible.cfg) = debug
HOST_KEY_CHECKING(/home/jake2184/ansible.cfg) = False
OS / ENVIRONMENT

Centos 8.2

STEPS TO REPRODUCE

Running the below always results in "changed: True"

- name: Deploy from template
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    template: Centos8
    cluster: mgmt1
    name: myhostname
    clone: True
    #state: present
    cloud_init_persist: True
    cloud_init:
      nic_boot_protocol: static
      nic_ip_address: 10.136.6.10
      nic_netmask: 255.255.255.0
      nic_gateway: 10.136.6.254
      nic_on_boot: true
      nic_name: enp1s0
      host_name: myhostname
EXPECTED RESULTS

The module to return changed:True only when a change occurs

ACTUAL RESULTS

The module returns changed:True with every iteration.

When running with -vvv to see the diff, the diffs can be compared and are equal

ovirt_quota: can't set cluster-level quotas because it can't decide between global and cluster quotas

SUMMARY
Can't set cluster level quotas with ansible 2.9.7 and ovirtsdk 4.3.4 against RHV 4.3.9

ISSUE TYPE
Bug Report
COMPONENT NAME
ovirt_quota

ANSIBLE VERSION
ansible 2.9.7
  config file = None
  configured module search path = ['/home/OAD/sloeuillet/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/OAD/sloeuillet/venv/lib/python3.6/site-packages/ansible
  executable location = /home/OAD/sloeuillet/venv/bin/ansible
  python version = 3.6.9 (default, Sep 11 2019, 16:40:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]

CONFIGURATION

OS / ENVIRONMENT
RHEL 7.8 to RHV-M 4.3.9 (based on RHEL 7.8)
Python 3.6
Ansible 2.9.7
ovirt-sdk-python 4.4.3

STEPS TO REPRODUCE
- name: create quotas
  ovirt_quota:
    auth: "{{ ovirt_auth }}"
    data_center: "{{ rhv_datacenter }}"
    name: "{{ quota.key }}"
    state: "{{ quota.value.state | default('present') }}"
    clusters:
      - name: "{{ quota.value.cluster }}"
        memory: "{{ quota.value.mem | default(-1) }}"
        cpu: "{{ quota.value.cpu | int | default(-1) }}"
  with_dict: "{{ rhv_quota }}"
  loop_control:
    loop_var: quota

rhv_quota:
  test_cpu_10pct:
    cluster: PDCLQAT
    cpu: 10
    memory: 20.0

EXPECTED RESULTS
Should set CPU limit for cluster to 10

ACTUAL RESULTS
Because the value is a quoted "{{ var }}" expression, it is passed to the module as a string, while the oVirt SDK expects an int. Since the module does not cast it explicitly, the SDK complains that it wants an integer value.

285     limit=otypes.QuotaClusterLimit(
286         memory_limit=float(cluster.get('memory')),
287         vcpu_limit=cluster.get('cpu'),

As you can see, the module casts the memory parameter to a float but does not cast the cpu parameter at all. When running Ansible, it gives this error:

''' File "/tmp/ansible_ovirt_quota_payload_XlaYEj/ansible_ovirt_quota_payload.zip/ansible/modules/cloud/ovirt/ovirt_quota.py", line 290, in main
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 20025, in add
return self._internal_add(limit, headers, query, wait)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 222, in _internal_add
request.body = writer.Writer.write(object, indent=True)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writer.py", line 189, in write
writer(obj, cursor, root)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writers.py", line 7376, in write_one
Writer.write_integer(writer, 'vcpu_limit', obj.vcpu_limit)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writer.py", line 77, in write_integer
return writer.write_element(name, Writer.render_integer(value))
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writer.py", line 69, in render_integer
raise TypeError('The 'value' parameter must be an integer')
TypeError: The 'value' parameter must be an integer
'''
In python bindings, this parameter uses Writer.write_decimal/Reader.read_decimal

old bug reappeared in ansible 2.9

SUMMARY

The ovirt_disks module does not reliably extend disks when they are specified by name and vm_name. If two disks exist that share the same name, then the first result is always selected.

The passed vm_name should be used to filter the search results and select the correct disk.

The bug was already reported against Ansible 2.5 and resolved in 2018 with Ansible 2.6.
The issue on the ansible repo was number 41008:
ansible/ansible#41008

But it reappeared in Ansible 2.9 and is present in all its minor releases.
I had to downgrade to 2.8.11 to find a working release, so it seems the bug was not there in 2.8.
I therefore opened a new issue, number 69431, on the ansible repo:
ansible/ansible#69431
but it was closed with the message:
"This plugin is no longer maintained in this repository and has been migrated to https://github.com/oVirt/ovirt-ansible-collection"

So I'm now opening the issue here.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

ovirt_disks

ANSIBLE VERSION
  config file = /home/manny/ovirttest/ansible.cfg
  configured module search path = [u'/home/manny/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/manny/venv-ansible/local/lib/python2.7/site-packages/ansible
  executable location = /home/manny/venv-ansible/bin/ansible
  python version = 2.7.17 (default, Nov  7 2019, 10:07:09) [GCC 9.2.1 20191008]
CONFIGURATION
DEFAULT_HOST_LIST(/home/manny/ovirttest/ansible.cfg) = [u'/home/manny/ovirttest/hosts']
DEFAULT_PRIVATE_KEY_FILE(/home/manny/ovirttest/ansible.cfg) = /home/manny/.ssh/id_rsa
DEFAULT_REMOTE_USER(/home/manny/ovirttest/ansible.cfg) = ansible
DEFAULT_TIMEOUT(/home/manny/ovirttest/ansible.cfg) = 60
DEFAULT_VAULT_PASSWORD_FILE(/home/manny/ovirttest/ansible.cfg) = /home/manny/.vault_ovirttest
HOST_KEY_CHECKING(/home/manny/ovirttest/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/manny/ovirttest/ansible.cfg) = False
OS / ENVIRONMENT

Running from: Ubuntu 19.10
Managing: RHV 4.3

STEPS TO REPRODUCE
  ovirt_disk:
    vm_name: 'the-vm'
    name: 'root-disk'
    size: '32GiB'
    interface: virtio_scsi
    storage_domain: vm_storage
    bootable: yes
    auth: '{{ rhv_auth }}'

EXPECTED RESULTS

The disk to be expanded.

ACTUAL RESULTS

The wrong disk is "found" and ovirt attempts expansion (unsuccessfully, in this case).

Traceback (most recent call last):
  File "/tmp/ansible_ovirt_disk_payload_YZT59A/ansible_ovirt_disk_payload.zip/ansible/modules/cloud/ovirt/ovirt_disk.py", line 791, in main
  File "/tmp/ansible_ovirt_disk_payload_YZT59A/ansible_ovirt_disk_payload.zip/ansible/module_utils/ovirt.py", line 623, in create
    **kwargs
  File "/home/manny/.local/lib/python2.7/site-packages/ovirtsdk4/services.py", line 6985, in add
    return self._internal_add(attachment, headers, query, wait)
  File "/home/manny/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/home/manny/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/home/manny/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/home/manny/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/home/manny/.local/lib/python2.7/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
Error: Fault reason is "Operation Failed". Fault detail is "[Cannot attach Virtual Disk. The disk is not shareable and is already attached to a VM.]". HTTP response code is 409.
failed: [localhost] (item={u'group': u'logging', u'name': u'logger05', u'ram': u'16GiB', u'net2_addr': u'10.12.126.15', u'net1_addr': u'10.12.125.15', u'net3_addr': u'10.12.231.206', u'net0_addr': u'10.12.124.15', u'processors': 4}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "activate": true,
            "auth": {
                "ca_file": null,
                "compress": true,
                "headers": null,
                "insecure": true,
                "kerberos": false,
                "timeout": 0,
                "token": "HX3lA90SCgpsJx9lXACZLHCR3Uv4SzphOV2lkPkbN93zrfzbTTpAIweq4YEfrY08X18N4oV51GB7FJ-Z6nC5qw",
                "url": "https://labengine.hk093znit.lab/ovirt-engine/api"
            },
            "bootable": null,
            "content_type": "data",
            "description": null,
            "download_image_path": null,
            "fetch_nested": false,
            "force": false,
            "format": "cow",
            "host": null,
            "id": "026b7c41-b186-46ff-a9a3-72fb0a1e4a50",
            "image_provider": null,
            "interface": "virtio_scsi",
            "logical_unit": null,
            "name": "logger-vm_Disk2",
            "nested_attributes": [],
            "openstack_volume_type": null,
            "poll_interval": 3,
            "profile": null,
            "quota_id": null,
            "shareable": null,
            "size": "500GiB",
            "sparse": true,
            "sparsify": null,
            "state": "present",
            "storage_domain": "vm_storage",
            "storage_domains": null,
            "timeout": 180,
            "upload_image_path": null,
            "vm_id": null,
            "vm_name": "logger05",
            "wait": true,
            "wipe_after_delete": null
        }
    },
    "item": {
        "group": "logging",
        "name": "logger05",
        "net0_addr": "10.12.124.15",
        "net1_addr": "10.12.125.15",
        "net2_addr": "10.12.126.15",
        "net3_addr": "10.12.231.206",
        "processors": 4,
        "ram": "16GiB"
    },
    "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach Virtual Disk. The disk is not shareable and is already attached to a VM.]\". HTTP response code is 409."
}

ovirt_vm: Add "Warm/Cold Reboot" Parameter

SUMMARY

The module ovirt_vm has no parameter to control whether a virtual machine should be warm or cold rebooted when started as "Run Once".

  • in the Run Once dialog there is an option called "Rollback this configuration during reboots"
  • in API it is called "volatile"
COMPONENT NAME

ovirt_vm

ADDITIONAL INFORMATION

Use case:
When configuring a Red Hat Linux installation via Ansible and Kickstart, the machine needs to be rebooted at the end of the installation process. Without "Rollback this configuration during reboots = True" for Run Once, the process goes into an installation loop.

---
- name: oVirt VM Creation
  hosts: localhost
  connection: local
  vars_files:
    # Contains variables to connect to the engine
    - engine_vars.yml
    # Contains encrypted `engine_password` variable using ansible-vault
    - passwords.yml
  tasks:
    - block:
        - name: Log in oVirt Engine
          ovirt_auth:
            hostname: "{{ engine_fqdn }}"
            username: "{{ engine_user }}"
            password: "{{ engine_password }}"
            insecure: "{{ engine_insecure | default(true) }}"
        - name: Remove VM, if VM is running it will be stopped
          ovirt_vm:
            auth: "{{ ovirt_auth }}"
            state: absent
            name: "{{ vm_name }}"
        - name: Create VM OS Disk
          ovirt_disk:
            auth: "{{ ovirt_auth }}"
            name: "{{ vm_name }}_OS"
            size: 10GiB
            format: raw
            sparse: True
            storage_domain: data
          register: mydisk
        - name: Create VM
          ovirt_vm:
            auth: "{{ ovirt_auth }}"
            state: running
            cluster: Default
            name: "{{ vm_name }}"
            cd_iso: "{{ iso_file }}"
            boot_devices:
              - hd
            memory: 4GiB
            memory_guaranteed: 4GiB
            memory_max: 4GiB
            cpu_cores: 2
            cpu_sockets: 2
            cpu_shares: 1024
            type: server
            operating_system: rhel_8x64
            disks:
              - name: "{{ vm_name }}_OS"
                bootable: True
                interface: virtio_scsi
            nics:
              - name: nic1
                profile_name: ovirtmgmt
            kernel_path: /opt/kernel/8.2/vmlinuz
            initrd_path: /opt/kernel/8.2/initrd.img
            kernel_params: inst.repo=cdrom inst.ks.sendmac inst.ks=http://{{ web_serv_ip }}:8080/ks.8.2.cfg ip={{ vm_ip }}:::255.255.255.0::eth0:off net.ifnames=0
            kernel_params_persist: false

      always:
        - name: Log out oVirt Engine
          ovirt_auth:
            state: absent
            ovirt_auth: "{{ ovirt_auth }}"
  collections:
    - ovirt.ovirt
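
A hypothetical sketch of how the requested option could look on the "Create VM" task above; the parameter name volatile mirrors the API field and is an assumption here, not an existing documented module option:

- name: Run once with configuration rolled back on reboot (hypothetical parameter)
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    state: running
    cluster: Default
    name: "{{ vm_name }}"
    volatile: true                  # hypothetical: would map to the API 'volatile' flag for Run Once
    kernel_params_persist: false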

Manageiq: CFME database is not configured while running ovirt.manageiq role

Running the ovirt.manageiq role in order to deploy CFME on a RHV engine fails on the CFME database configuration.
The role is run as described here: https://github.com/oVirt/ovirt-ansible-manageiq#example-playbook
It seems something broke within it, as the DB configuration fails for CFME versions that were successfully installed in the past.
Link to failed build: https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/v2v-pipeline-mguetta/444/console

Error:
TASK [oVirt.manageiq : Start ManageIQ server] **********************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find the requested service evmserverd: host"}

Workaround: configure CFME database manually

hosted_engine_setup fails when run remotely.

SUMMARY

The hosted_engine_setup role does not work when executed from a remote host.

COMPONENT NAME

Role hosted_engine_setup

STEPS TO REPRODUCE

Inventory:

# Settings for the hosted engine 
he_bridge_if: bond0
he_fqdn:***
he_shortname: "{{ he_fqdn.split('.')[0] }}"
he_vm_ip_addr: 10.123.250.18
he_vm_ip_prefix: 29
he_vm_dns_addr: ['10.0.80.11','10.0.80.12']
he_default_gateway: 10.123.250.17 
he_vm_etc_hosts: true
he_domain_type: iscsi
he_cluster: management
he_datacenter: management
he_appliance_password: "{{ login_password }}"
he_admin_password: "{{ login_password }}"
he_mem_size_MB: "4096"
he_vcpus: 4
he_storage_domain_addr: "{{ iscsi_portal }}"
he_iscsi_portal_port: "3260"
he_iscsi_tpgt: "1"
he_iscsi_target: "{{ iscsi_target }}"
he_lun_id": "{{ iscsi_lun }}"
- name: Deploy oVirt hosted engine
  hosts: ***

  vars: 
     he_host_name: "{{ inventory_hostname}}"
     he_host_address: "{{ private_ip }}"    
    
  roles:
     - role: hosted_engine_setup
  collections:
     - redhat.rhv
EXPECTED RESULTS

Hosted engine is deployed.

ACTUAL RESULTS

Process hangs until it times out at:

TASK [redhat.rhv.hosted_engine_setup : Wait for the local VM] ***********************************************************************

This is probably due to the following in full_execution.yaml

- name: Local engine VM installation - Pre tasks
  block:
    - name: 03 Bootstrap local VM
      import_tasks: bootstrap_local_vm/03_engine_initial_tasks.yml
      delegate_to: "{{ groups.engine[0] }}"

I suspect that groups.engine[0] refers to the temporary IP of the HE on the local bridge. This is not accessible from outside the host.

Add support for Ignition File using FedoraCoreOS (FCOS) or RHCOS

It would be nice if the module allowed passing through an Ignition file at creation time, for the first boot.

Fedora CoreOS (FCOS) Ignition files specify the configuration for provisioning FCOS instances. The process begins with a YAML configuration file. The FCOS Configuration Transpiler (FCCT) converts the human-friendly YAML file into machine-friendly JSON, which is the final configuration file for Ignition.

https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/

Is there any plan on the roadmap for this feature?

Thanks in advance,

Ovirt ansible RPMs support for zLinux(s390x) architecture.

SUMMARY

I am working on building https://github.com/ManageIQ/manageiq on the s390x architecture, for which I need the ovirt-ansible RPMs on s390x; they are missing from https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/

COMPONENT NAME

ovirt-ansible RPMs

These are the RPMs which I have built using the src.rpm on s390x; they need to be added to the RPM repo alongside ppc64le and x86_64:

# ll ovirt*
ovirt-ansible-cluster-upgrade-1.2.3-1.el8.noarch.rpm
ovirt-ansible-disaster-recovery-1.3.0-1.el8.noarch.rpm
ovirt-ansible-engine-setup-1.2.4-1.el8.noarch.rpm
ovirt-ansible-hosted-engine-setup-1.1.8-1.el8.noarch.rpm
ovirt-ansible-image-template-1.2.2-1.el8.noarch.rpm
ovirt-ansible-infra-1.2.2-1.el8.noarch.rpm
ovirt-ansible-manageiq-1.2.1-1.el8.noarch.rpm
ovirt-ansible-repositories-1.2.5-1.el8.noarch.rpm
ovirt-ansible-roles-1.2.3-1.el8.noarch.rpm
ovirt-ansible-shutdown-env-1.1.0-1.el8.noarch.rpm
ovirt-ansible-vm-infra-1.2.3-1.el8.noarch.rpm

ovirt_disk: Unable to configure (FC) Direct LUN

From @mattpoel on Aug 11, 2020 16:13

SUMMARY

ovirt_disk provides the functionality to configure (FC) direct LUNs. We tried to configure FC direct LUNs, but ran into multiple issues.

  • Problem 1: If we try to create the direct LUN according to documentation, we will receive "[Cannot add Virtual Disk. The provided LUN is not visible by the specified host, please check storage server connectivity.]"
  • Problem 2: If you remove the host in the task, the direct LUN will be created, but incorrectly and not attached to the VM.
ISSUE TYPE
  • Bug Report
COMPONENT NAME

ovirt_disk

ANSIBLE VERSION
ansible 2.9.11
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['//home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ansible
  executable location = /app/ansible/ovirt/tools/Python-3.8.5/bin/ansible
  python version = 3.8.5 (default, Jul 28 2020, 12:35:17) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
No output
OS / ENVIRONMENT

ovirt / OLVM version: 4.3.6.6-1.0.9

STEPS TO REPRODUCE

Try to configure a FC direct LUN for a VM.

  - name: VM -> Add Direct LUNs
    ovirt_disk:
      auth:
        username: "{{ ovirt_username }}"
        password: "{{ ovirt_password }}"
        insecure: yes
        url: "{{ ovirt_url }}"
      
      name: "dluntest_TEST01"
      host: "kvm01"
      vm_name: "dluntest"
      logical_unit:
        id: "36000144xxxxxxxx1d38"
        storage_type: fcp
EXPECTED RESULTS

FC Direct LUN is created and attached to VM.

ACTUAL RESULTS

When we run the playbook, we receive the following error:

TASK [VM -> Add Direct LUNs] *****************************************************************************************************************************************
task path: /app/ansible/ovirt/DBCLUSTER_dlun.yml:35
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<127.0.0.1> EXEC /bin/sh -c 'echo ~ansible && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp `"&& mkdir /home/ansible/.ansible/tmp/ansible-tmp-1597136428.0939364-24094-36864609799735 && echo ansible-tmp-1597136428.0939364-24094-36864609799735="` echo /home/ansible/.ansible/tmp/ansible-tmp-1597136428.0939364-24094-36864609799735 `" ) && sleep 0'
Using module file /app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ansible/modules/cloud/ovirt/ovirt_disk.py
<127.0.0.1> PUT /app/homes/ansible/.ansible/tmp/ansible-local-23909nslz0cij/tmptw32qp39 TO /app/homes/ansible/.ansible/tmp/ansible-tmp-1597136428.0939364-24094-36864609799735/AnsiballZ_ovirt_disk.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1597136428.0939364-24094-36864609799735/ /home/ansible/.ansible/tmp/ansible-tmp-1597136428.0939364-24094-36864609799735/AnsiballZ_ovirt_disk.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/ansible/ovirt/tools/Python-3.8.5/bin/python3.8 /home/ansible/.ansible/tmp/ansible-tmp-1597136428.0939364-24094-36864609799735/AnsiballZ_ovirt_disk.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1597136428.0939364-24094-36864609799735/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt_disk_payload_e2ur8e_i/ansible_ovirt_disk_payload.zip/ansible/modules/cloud/ovirt/ovirt_disk.py", line 737, in main
  File "/tmp/ansible_ovirt_disk_payload_e2ur8e_i/ansible_ovirt_disk_payload.zip/ansible/module_utils/ovirt.py", line 621, in create
    entity = self._service.add(
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/services.py", line 7697, in add
    return self._internal_add(disk, headers, query, wait)
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Cannot add Virtual Disk. The provided LUN is not visible by the specified host, please check storage server connectivity.]". HTTP response code is 400.
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "activate": true,
            "auth": {
                "insecure": true,
                "password": "userpassword",
                "url": "https://olvm.mydomain/ovirt-engine/api",
                "username": "ansible@internal"
            },
            "bootable": null,
            "content_type": "data",
            "description": null,
            "download_image_path": null,
            "fetch_nested": false,
            "force": false,
            "format": "cow",
            "host": "kvm01",
            "id": null,
            "image_provider": null,
            "interface": null,
            "logical_unit": {
                "id": "36000144xxxxxxxx1d38",
                "storage_type": "fcp"
            },
            "name": "dlun_TEST01",
            "nested_attributes": [],
            "openstack_volume_type": null,
            "poll_interval": 3,
            "profile": null,
            "quota_id": null,
            "shareable": null,
            "size": null,
            "sparse": null,
            "sparsify": null,
            "state": "present",
            "storage_domain": null,
            "storage_domains": null,
            "timeout": 180,
            "upload_image_path": null,
            "vm_id": null,
            "vm_name": "dlun",
            "wait": true,
            "wipe_after_delete": null
        }
    },
    "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot add Virtual Disk. The provided LUN is not visible by the specified host, please check storage server connectivity.]\". HTTP response code is 400."

Although we can add the LUN via this host in the UI:

UI_DirectLUN_List

If we remove the KVM host from the task, the direct LUN will be created but we receive the following error:

TASK [VM -> Add Direct LUNs] ****************************************************************************************************************************************
task path: /app/ansible/ovirt/DBCLUSTER_dlun.yml:35
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<127.0.0.1> EXEC /bin/sh -c 'echo ~ansible && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp `"&& mkdir /home/ansible/.ansible/tmp/ansible-tmp-1597161047.3235068-7178-41153481182820 && echo ansible-tmp-1597161047.3235068-7178-41153481182820="` echo /home/ansible/.ansible/tmp/ansible-tmp-1597161047.3235068-7178-41153481182820 `" ) && sleep 0'
Using module file /app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ansible/modules/cloud/ovirt/ovirt_disk.py
<127.0.0.1> PUT /app/homes/ansible/.ansible/tmp/ansible-local-6988q6ur1ftc/tmp1pq4_ccj TO /app/homes/ansible/.ansible/tmp/ansible-tmp-1597161047.3235068-7178-41153481182820/AnsiballZ_ovirt_disk.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1597161047.3235068-7178-41153481182820/ /home/ansible/.ansible/tmp/ansible-tmp-1597161047.3235068-7178-41153481182820/AnsiballZ_ovirt_disk.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/app/ansible/ovirt/tools/Python-3.8.5/bin/python3.8 /home/ansible/.ansible/tmp/ansible-tmp-1597161047.3235068-7178-41153481182820/AnsiballZ_ovirt_disk.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1597161047.3235068-7178-41153481182820/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt_disk_payload_ksta8q1c/ansible_ovirt_disk_payload.zip/ansible/modules/cloud/ovirt/ovirt_disk.py", line 737, in main
  File "/tmp/ansible_ovirt_disk_payload_ksta8q1c/ansible_ovirt_disk_payload.zip/ansible/module_utils/ovirt.py", line 621, in create
    entity = self._service.add(
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/services.py", line 7697, in add
    return self._internal_add(disk, headers, query, wait)
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/app/ansible/ovirt/tools/Python-3.8.5/lib/python3.8/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.NotFoundError: Fault reason is "Operation Failed". Fault detail is "Entity not found: b2db373a-5cb3-41e7-a2c2-3428e74add19". HTTP response code is 404.
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "activate": true,
            "auth": {
                "insecure": true,
                "password": "userpassword",
                "url": "https://olvm.mydomain/ovirt-engine/api",
                "username": "ansible@internal"
            },
            "bootable": null,
            "content_type": "data",
            "description": null,
            "download_image_path": null,
            "fetch_nested": false,
            "force": false,
            "format": "cow",
            "host": null,
            "id": null,
            "image_provider": null,
            "interface": null,
            "logical_unit": {
                "id": "36000144xxxxxxxx1d38",
                "storage_type": "fcp"
            },
            "name": "dlun_TEST01",
            "nested_attributes": [],
            "openstack_volume_type": null,
            "poll_interval": 3,
            "profile": null,
            "quota_id": null,
            "shareable": null,
            "size": null,
            "sparse": null,
            "sparsify": null,
            "state": "present",
            "storage_domain": null,
            "storage_domains": null,
            "timeout": 180,
            "upload_image_path": null,
            "vm_id": null,
            "vm_name": "dlun",
            "wait": true,
            "wipe_after_delete": null
        }
    },
    "msg": "Fault reason is \"Operation Failed\". Fault detail is \"Entity not found: b2db373a-5cb3-41e7-a2c2-3428e74add19\". HTTP response code is 404."

There is actually a direct LUN with the mentioned ID, but it does not look correct: its size is <1 GiB instead of 16 GiB, and it is not attached to the VM.

Incorrect_DirectLUN

Copied from original issue: ansible/ansible#71210

Check for missing commits vs devel

SUMMARY

The "Big Migration" has now taken place.

As this collection already exists, we need to carefully check to see if any further commits went into devel since this repo was created.

Please check the contents of https://github.com/ansible-collection-migration/ovirt.ovirt against this repo

In particular:

  • Please do a per-file level diff against every file in the ansible-collection-migration repo and this one
  • Pay attention to files added and removed.
  • During the last two weeks there have been lots of fixes, especially around tests, dependencies, and new collection features, e.g. meta/action_groups.yml
ISSUE TYPE
  • Bug Report

Can't delete a vlan tag assigned to a network

If I create a VLAN tag for a network, I can't delete the tag; I have to delete the network and recreate it without the tag.
In the web console I can easily remove a VLAN tag from a network.

I'm currently using Ansible 2.9.6.
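
For context, a sketch of the kind of task involved (names are placeholders): the network is created with a VLAN tag, and there appears to be no way to later remove just the tag short of deleting and recreating the network.

- name: Create a network with a VLAN tag
  ovirt.ovirt.ovirt_network:
    auth: "{{ ovirt_auth }}"
    data_center: Default            # placeholder
    name: example_net               # placeholder
    vlan_tag: 100
    state: present

# Re-running the task without vlan_tag leaves the existing tag in place;
# so far the only way found is state: absent followed by recreating the network.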

Inventory plugin broken in python2

SUMMARY
COMPONENT NAME

ovirt inventory plugin

STEPS TO REPRODUCE

Run inventory plugin in python2

EXPECTED RESULTS

It works; python2 is still supported and required for RHEL 7 Ansible Tower and Ansible Tower on OpenShift.

ACTUAL RESULTS
ansible-inventory 2.9.14
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-inventory
  python version = 2.7.5 (default, Mar 20 2020, 17:08:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
[WARNING]:  * Failed to parse /tmp/awx_675_2eostks8/ovirt.yml with auto plugin:
'datetime.datetime' object has no attribute 'timestamp'
  File "/usr/lib/python2.7/site-packages/ansible/inventory/manager.py", line 280, in parse_source
    plugin.parse(self._inventory, self._loader, source, cache=cache)
  File "/usr/lib/python2.7/site-packages/ansible/plugins/inventory/auto.py", line 58, in parse
    plugin.parse(inventory, loader, path, cache=cache)
  File "/var/lib/awx/vendor/awx_ansible_collections/ansible_collections/ovirt/ovirt/plugins/inventory/ovirt.py", line 264, in parse
    source_data = self._query(query_filter=query_filter)
  File "/var/lib/awx/vendor/awx_ansible_collections/ansible_collections/ovirt/ovirt/plugins/inventory/ovirt.py", line 149, in _query
    return [self._get_dict_of_struct(host) for host in self._get_hosts(query_filter=query_filter)]
  File "/var/lib/awx/vendor/awx_ansible_collections/ansible_collections/ovirt/ovirt/plugins/inventory/ovirt.py", line 129, in _get_dict_of_struct
    'creation_time_timestamp': vm.creation_time.timestamp(),

Notes

Just sanity checking that this is a python2 vs python3 problem:

$ python2
Python 2.7.18 (default, Jul 20 2020, 00:00:00) 
[GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import datetime
>>> datetime.datetime.now().timestamp()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'datetime.datetime' object has no attribute 'timestamp'
$ python3
Python 3.8.5 (default, Aug 12 2020, 00:00:00)
[GCC 10.2.1 20200723 (Red Hat 10.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datetime
>>> datetime.datetime.now().timestamp()
1604426969.311946

ovirt_nic_info returns no description in oVirt 4.4.

SUMMARY

I am working on upgrading an oVirt 4.2 instance to oVirt 4.4.

ovirt_nic_info returns a description dict with values on oVirt 4.2; the same dict is empty on oVirt 4.4.

COMPONENT NAME

ovirt_nic_info

STEPS TO REPRODUCE
#
# Test ovirt
#
- hosts:
    - localhost

  connection: local
  gather_facts: false

  tasks:
    - name: get the ovirt credentials from 1password.
      set_fact:
        engine_user: '{{ lookup( "onepassword", "ovirt44.omni", field="username" ) }}'
        engine_password: '{{ lookup( "onepassword", "ovirt44.omni", field="password" ) }}'
        engine_url: 'https://ovirt44.omni/ovirt-engine/api'

    - name: Login to oVirt
      ovirt_auth:
        url: https://ovirt44.omni/ovirt-engine/api
        username: "{{ engine_user }}@LDAP"
        password: "{{ engine_password }}"
        ca_file: "{{ engine_cafile | default(omit) }}"
        insecure: "{{ engine_insecure | default(true) }}"

    - name: dump ovirt nic info from ovirt44
      ovirt_nic_info:
        auth: "{{ ovirt_auth }}"
        vm: "gryphon"
      register: ovirt44_nics

    - name: Logout from oVirt
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"
      delegate_to: localhost
      run_once: true

    - set_fact:
        engine_password: '{{ lookup( "onepassword", "nim.nersc.gov", field="password" ) }}'
        engine_user: '{{ lookup( "onepassword", "nim.nersc.gov", field="username" ) }}'

    - name: Login to oVirt
      ovirt_auth:
        url: https://192.168.85.24/ovirt-engine/api
        username: "{{ engine_user }}@LDAP"
        password: "{{ engine_password }}"
        ca_file: "{{ engine_cafile | default(omit) }}"
        insecure: "{{ engine_insecure | default(true) }}"

    - name: dump ovirt nic info from ovirt44
      ovirt_nic_info:
        auth: "{{ ovirt_auth }}"
        vm: "gryphon"
      register: ovirt42_nics

    - name: Logout from oVirt
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"
      delegate_to: localhost
      run_once: true

    - name: dump ovirt44_nics
      debug:
        msg: "{{ ovirt44_nics }}"

    - name: dump ovirt42_nics
      debug:
        msg: "{{ ovirt42_nics }}"

EXPECTED RESULTS

This is from the ovirt4.2 system

           {
                "href": "/ovirt-engine/api/vms/85f6907d-4f9b-451c-88bc-6c9abfcccda5/nics/17b8a2ac-e983-4a15-a65a-b9fc3e634191",
                "id": "17b8a2ac-e983-4a15-a65a-b9fc3e634191",
                "interface": "virtio",
                "linked": true,
                "mac": {
                    "address": "00:1a:4a:16:01:3c"
                },
                "name": "onewire",
                "network_filter_parameters": [],
                "plugged": true,
                "reported_devices": [
                    {
                        "description": "guest reported data",
                        "href": "/ovirt-engine/api/vms/85f6907d-4f9b-451c-88bc-6c9abfcccda5/reporteddevices/6f6e6577-6972-6530-303a-31613a34613a",
                        "id": "6f6e6577-6972-6530-303a-31613a34613a",
                        "ips": [
                            {
                                "address": "192.168.93.254",
                                "version": "v4"
                            },
                            {
                                "address": "fe80::21a:4aff:fe16:13c",
                                "version": "v6"
                            }
                        ],
                        "mac": {
                            "address": "00:1a:4a:16:01:3c"
                        },
                        "name": "onewire",
                        "type": "network"
                    }
                ],
                "statistics": [],
                "vm": {
                    "href": "/ovirt-engine/api/vms/85f6907d-4f9b-451c-88bc-6c9abfcccda5",
                    "id": "85f6907d-4f9b-451c-88bc-6c9abfcccda5"
                },
                "vnic_profile": {
                    "href": "/ovirt-engine/api/vnicprofiles/00a6c245-529b-473a-b9fb-8cb2ea96166c",
                    "id": "00a6c245-529b-473a-b9fb-8cb2ea96166c"
                }
            }
        ]
ACTUAL RESULTS

This is from the ovirt4.4 system. The engine itself does know the IP address and other information, yet the module output below does not include it.

            {
                "href": "/ovirt-engine/api/vms/05064ce9-3be8-4410-b42b-d02325908b49/nics/4d4c91fa-93f2-4433-8a9c-bf7b42c4582d",
                "id": "4d4c91fa-93f2-4433-8a9c-bf7b42c4582d",
                "interface": "virtio",
                "linked": true,
                "mac": {
                    "address": "56:6f:e8:d6:00:2f"
                },
                "name": "pdunet",
                "network_filter_parameters": [],
                "plugged": true,
                "reported_devices": [],
                "statistics": [],
                "vm": {
                    "href": "/ovirt-engine/api/vms/05064ce9-3be8-4410-b42b-d02325908b49",
                    "id": "05064ce9-3be8-4410-b42b-d02325908b49"
                },
                "vnic_profile": {
                    "href": "/ovirt-engine/api/vnicprofiles/cb886599-a652-4006-b53e-2543747ebf3f",
                    "id": "cb886599-a652-4006-b53e-2543747ebf3f"
                }
            }

Is it possible to get any of the information that is in 4.2, from the 4.4 system? I need to get the IPv4 addresses.
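One thing worth trying, as a hedged sketch only (whether oVirt 4.4 populates reported_devices this way is exactly what is in question here), is asking the info module to follow the nested links explicitly:

- name: dump ovirt nic info from ovirt44, following nested links
  ovirt_nic_info:
    auth: "{{ ovirt_auth }}"
    vm: "gryphon"
    fetch_nested: true
    nested_attributes:
      # attribute names below are an assumption, not verified against 4.4
      - reported_devices
      - ips
  register: ovirt44_nics

If that still comes back empty, the guest-agent data may simply no longer be exposed on the NIC object in 4.4 and would have to be read from the VM's reported devices instead.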

Make password optional in the users dict for the infra role

SUMMARY

Make it possible to omit the password for a user that is created using the infra role.

COMPONENT NAME

aaa_jdbc sub role of infra role.

ADDITIONAL INFORMATION

Currently, if you pass a "users" dict to the infra role, each user must have a value set for password; otherwise the "manage internal users passwords" task fails.

This makes it impossible to use the role to simply change attributes on a user, or to remove a user with something as simple as:

users:
  - name: foo
    state: absent

The solution would be to add a check whether a password has been set, and call the password-reset command only if there is indeed a password.
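A minimal sketch of the proposed guard (the command line is only a placeholder, not the role's actual task):

- name: manage internal users passwords
  command: "ovirt-aaa-jdbc-tool user password-reset {{ item.name }}"   # placeholder invocation
  no_log: true
  loop: "{{ users }}"
  when: item.password is defined and item.password | length > 0

With such a when condition, entries like the one above (name plus state only) would pass through the role without touching passwords.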

ovirt_vm module and PlacementPolicy

SUMMARY

According to API it is possible to set PlacementPolicy with a list of Hosts.
http://ovirt.github.io/ovirt-engine-api-model/4.4/#types/vm_placement_policy
It is also possible to mark specific hosts in the GUI.

The ovirt_vm module does not allow that: the placement_policy parameter is a string, and it relies on the "host" parameter, which is a string as well.
It is not possible to define several hosts.

I think the solution is to separate the host parameter from the placement_policy and make placement_policy a dict (affinity and hosts).
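For illustration only, a hypothetical interface (it does not exist in ovirt_vm today) along the lines of what is being requested, mirroring the API's vm_placement_policy type:

placement_policy:            # hypothetical structure, not a current module option
  affinity: migratable
  hosts:
    - host1.example.com
    - host2.example.com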

ovirt_disk: Unable to detach a direct LUN (works fine via ovirt GUI)

SUMMARY

I'm trying to detach direct LUNs from VMs using Ansible / ovirt_disk. The detach process itself works fine in the ovirt GUI, but when I'm trying to use ovirt_disk it runs into the default timeout of 180s.

COMPONENT NAME

ovirt_disk

STEPS TO REPRODUCE
  - name: "ovirt -> Detach Direct LUNs"
    delegate_to: localhost
    ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "testcl_LUN01"
      vm_name: "testcl1"
      state: detached
    tags: ovirt_configure_direct_luns
EXPECTED RESULTS

LUN gets detached.

ACTUAL RESULTS

Ansible runs into the default timeout of 180s.

TASK [ovirt -> Detach Direct LUNs] **********************************************************************************************************************************************************************************************
task path: /app/ansible/ovirt/ovirt-01-Reconfigure_Direct_LUNs.yml:56
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: 02e027cd
<localhost> EXEC /bin/sh -c 'echo ~02e027cd && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp `"&& mkdir /home/ansible/.ansible/tmp/ansible-tmp-1602080927.2819948-18820-128241010457198 && echo ansible-tmp-1602080927.2819948-18820-128241010457198="` echo /home/ansible/.ansible/tmp/ansible-tmp-1602080927.2819948-18820-128241010457198 `" ) && sleep 0'
Using module file /home/ansible/.ansible/plugins/modules/cloud/ovirt/ovirt_disk.py
<localhost> PUT /home/ansible/.ansible/tmp/ansible-local-18702d4u9gbd7/tmp2ch6s9zx TO /home/ansible/.ansible/tmp/ansible-tmp-1602080927.2819948-18820-128241010457198/AnsiballZ_ovirt_disk.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1602080927.2819948-18820-128241010457198/ /home/ansible/.ansible/tmp/ansible-tmp-1602080927.2819948-18820-128241010457198/AnsiballZ_ovirt_disk.py && sleep 0'
<localhost> EXEC /bin/sh -c '/app/ansible/ovirt/tools/Python-3.8.5/bin/python3.8 /home/ansible/.ansible/tmp/ansible-tmp-1602080927.2819948-18820-128241010457198/AnsiballZ_ovirt_disk.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1602080927.2819948-18820-128241010457198/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt_disk_payload_aqeudp3_/ansible_ovirt_disk_payload.zip/ansible/modules/ovirt_disk.py", line 792, in main
  File "/tmp/ansible_ovirt_disk_payload_aqeudp3_/ansible_ovirt_disk_payload.zip/ansible/module_utils/ovirt.py", line 640, in create
    wait(
  File "/tmp/ansible_ovirt_disk_payload_aqeudp3_/ansible_ovirt_disk_payload.zip/ansible/module_utils/ovirt.py", line 364, in wait
    raise Exception("Timeout exceed while waiting on result state of the entity.")
Exception: Timeout exceed while waiting on result state of the entity.
[WARNING]: Module did not set no_log for pass_discard
failed: [testcl1] (item={'name': 'LUN01', 'id': '36000144000000010e04v878sf8723fd9'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "invocation": {
        "module_args": {
            "activate": null,
            "auth": {
                "ca_file": null,
                "compress": true,
                "headers": null,
                "insecure": true,
                "kerberos": false,
                "timeout": 0,
                "token": "asdfasdfasdf",
                "url": "https://ovirt.domain.local/ovirt-engine/api"
            },
            "backup": null,
            "bootable": null,
            "content_type": "data",
            "description": null,
            "download_image_path": null,
            "fetch_nested": false,
            "force": false,
            "format": "cow",
            "host": null,
            "id": null,
            "image_provider": null,
            "interface": null,
            "logical_unit": null,
            "name": "testcl_LUN01",
            "nested_attributes": [],
            "openstack_volume_type": null,
            "pass_discard": null,
            "poll_interval": 3,
            "profile": null,
            "propagate_errors": null,
            "quota_id": null,
            "scsi_passthrough": null,
            "shareable": null,
            "size": null,
            "sparse": null,
            "sparsify": null,
            "state": "detached",
            "storage_domain": null,
            "storage_domains": null,
            "timeout": 180,
            "upload_image_path": null,
            "uses_scsi_reservation": null,
            "vm_id": null,
            "vm_name": "testcl1",
            "wait": true,
            "wipe_after_delete": null
        }
    },
    "item": {
        "id": "36000144000000010e04v878sf8723fd9",
        "name": "LUN01"
    },
    "msg": "Timeout exceed while waiting on result state of the entity."
}
<localhost> EXEC /bin/sh -c 'echo ~02e027cd && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp `"&& mkdir /home/ansible/.ansible/tmp/ansible-tmp-1602081108.6279354-18820-259396191722806 && echo ansible-tmp-1602081108.6279354-18820-259396191722806="` echo /home/ansible/.ansible/tmp/ansible-tmp-1602081108.6279354-18820-259396191722806 `" ) && sleep 0'
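As a workaround for the symptom (not for whatever the engine is doing underneath), the module's own timeout and wait parameters, visible in the module arguments above, can be adjusted; a hedged sketch:

  - name: "ovirt -> Detach Direct LUNs"
    delegate_to: localhost
    ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "testcl_LUN01"
      vm_name: "testcl1"
      state: detached
      timeout: 600        # raise the wait timeout above the 180 s default
      # wait: false       # or skip waiting entirely and verify the detach in a later task
    tags: ovirt_configure_direct_luns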

ovirt_quota, cluster cpu limit won't cast string to int (unable to use variables)

SUMMARY
Can't set cluster level quotas with ansible 2.9.7 and ovirtsdk 4.3.4 against RHV 4.3.9

ISSUE TYPE
Bug Report
COMPONENT NAME
ovirt_quota

ANSIBLE VERSION
'''
ansible 2.9.7
config file = None
configured module search path = ['/home/OAD/sloeuillet/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/OAD/sloeuillet/venv/lib/python3.6/site-packages/ansible
executable location = /home/OAD/sloeuillet/venv/bin/ansible
python version = 3.6.9 (default, Sep 11 2019, 16:40:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
'''

CONFIGURATION
''''''

OS / ENVIRONMENT
'''RHEL 7.8 to RHV-M 4.3.9 (based on RHEL 7.8)
Python 3.6
Ansible 2.9.7
ovirt-sdk-python 4.4.3'''

STEPS TO REPRODUCE
'''
- name: create quotas
  ovirt_quota:
    auth: "{{ ovirt_auth }}"
    data_center: "{{ rhv_datacenter }}"
    name: "{{ quota.key }}"
    state: "{{ quota.value.state | default('present') }}"
    clusters:
      - name: "{{ quota.value.cluster }}"
        memory: "{{ quota.value.mem | default(-1) }}"
        cpu: "{{ quota.value.cpu | int | default(-1) }}"
  with_dict: "{{ rhv_quota }}"
  loop_control:
    loop_var: quota
'''

'''yaml
rhv_quota:
  test_cpu_10pct:
    cluster: PDCLQAT
    cpu: 10
    memory: 20.0
'''

EXPECTED RESULTS
Should set CPU limit for cluster to 10

ACTUAL RESULTS
Because the value is a quoted "{{ var }}", the module receives a string, while ovirtsdk expects an int.
Since there is no explicit cast, it complains that it wants an integer value.

limit=otypes.QuotaClusterLimit(
    memory_limit=float(cluster.get('memory')),
    vcpu_limit=cluster.get('cpu'),

As you can see, it casts the memory parameter to a float but does not cast the cpu parameter at all.
When running ansible, it gives this error:

''' File "/tmp/ansible_ovirt_quota_payload_XlaYEj/ansible_ovirt_quota_payload.zip/ansible/modules/cloud/ovirt/ovirt_quota.py", line 290, in main
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 20025, in add
return self._internal_add(limit, headers, query, wait)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 222, in _internal_add
request.body = writer.Writer.write(object, indent=True)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writer.py", line 189, in write
writer(obj, cursor, root)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writers.py", line 7376, in write_one
Writer.write_integer(writer, 'vcpu_limit', obj.vcpu_limit)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writer.py", line 77, in write_integer
return writer.write_element(name, Writer.render_integer(value))
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writer.py", line 69, in render_integer
raise TypeError("The 'value' parameter must be an integer")
TypeError: The 'value' parameter must be an integer
'''
In python bindings, this parameter uses Writer.write_decimal/Reader.read_decimal

ovirt_auth received error "ovirtsdk4 version 4.4.0 or higher is required for this module" with Python3

I'm running Red Hat Linux 7.7 and trying to connect to and create VMs on Red Hat Virtualization 4.3.10. When I call the module ovirt_auth in a playbook using python 2.7, the module works but it would fail if I switch to Python 3.6.8. I couldn't figure out why it's not working with Python 3.6.8

TASK [Obtain SSO token with using username/password credentials] ************************************************************************************************************************************************************************
task path: /stage/ansible/rhv/create-ovirt-vm.yml:27
Using module file /stage/ansible/collections/ansible_collections/ovirt/ovirt/plugins/modules/ovirt_auth.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "ca_file": "ca.pem",
            "compress": true,
            "headers": null,
            "hostname": null,
            "insecure": null,
            "kerberos": false,
            "ovirt_auth": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "state": "present",
            "timeout": 0,
            "token": null,
            "url": "https://server/ovirt-engine/api",
            "username": "admin@internal"
        }
    },
    "msg": "ovirtsdk4 version 4.4.0 or higher is required for this module"
}
Read vars_file 'engine_vars.yml'

Ansible and ovirt-engine-sdk-python 4.4.4 are installed with both pip and pip3.

[root@rhv-ansible2 rhv]# pip list
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
Package                      Version
---------------------------- ----------
ansible                      2.9.13
Babel                        0.9.6
backports.ssl-match-hostname 3.5.0.1
Beaker                       1.5.4
certifi                      2019.11.28
cffi                         1.6.0
chardet                      3.0.4
configobj                    4.7.2
cryptography                 1.7.2
decorator                    3.4.0
enum34                       1.0.4
ethtool                      0.8
httplib2                     0.9.2
idna                         2.8
iniparse                     0.4
ipaddr                       2.1.11
ipaddress                    1.0.16
Jinja2                       2.7.2
jmespath                     0.9.0
kitchen                      1.1.1
lxml                         3.2.1
M2Crypto                     0.21.1
Magic-file-extensions        0.2
Mako                         0.8.1
MarkupSafe                   0.11
matplotlib                   1.2.0
nfsometer                    1.7
NFStest                      2.1.5
nose                         1.3.7
numpy                        1.7.1
ovirt-engine-sdk-python      4.4.4
paramiko                     2.1.1
passlib                      1.6.5
Paste                        1.7.5.1
pciutils                     1.7.3
perf                         0.1
pip                          20.2.3
ply                          3.4
pyasn1                       0.1.9
pycparser                    2.14
pycurl                       7.19.0
pygobject                    3.22.0
pygpgme                      0.3
pyinotify                    0.9.4
pyliblzma                    0.5.3
pyOpenSSL                    0.13.1
pyparsing                    1.5.6
PyPowerStore                 1.0.0.0
python-dateutil              1.5
python-dmidecode             3.10.13
python-linux-procfs          0.4.9
pytz                         2016.10
pyudev                       0.15
pyvmomi                      6.7.3
pyxattr                      0.5.1
PyYAML                       3.10
requests                     2.22.0
rhnlib                       2.5.65
schedutils                   0.4
setuptools                   0.9.8
six                          1.13.0
slip                         0.4.0
slip.dbus                    0.4.0
subscription-manager         1.24.13
syspurpose                   1.24.13
Tempita                      0.5.1
urlgrabber                   3.10
urllib3                      1.25.7
yum-metadata-parser          1.1.4
[root@rhv-ansible2 rhv]# pip3 list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
ansible (2.9.13)
certifi (2020.6.20)
cffi (1.14.3)
chardet (3.0.4)
cryptography (3.1)
idna (2.10)
Jinja2 (2.11.2)
MarkupSafe (1.1.1)
ovirt-engine-sdk-python (4.4.4)
pip (9.0.3)
pycparser (2.20)
pycurl (7.43.0.6)
pyOpenSSL (19.1.0)
pyvmomi (7.0)
PyYAML (5.3.1)
requests (2.24.0)
setuptools (39.2.0)
six (1.15.0)
urllib3 (1.25.10)

The ovirt.ovirt collection is installed from ansible galaxy.
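The module check runs under whichever interpreter Ansible selects (here /usr/bin/python3, per the EXEC line in the debug output above), so it is worth confirming what that interpreter actually imports. A small diagnostic sketch, assuming /usr/bin/python3 is the interpreter in use:

- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    - name: show which ovirtsdk4 the interpreter sees
      command: "{{ ansible_python_interpreter }} -c 'import sys, ovirtsdk4; print(sys.executable); print(ovirtsdk4.__file__)'"
      register: sdk_check
      changed_when: false
    - debug:
        var: sdk_check.stdout_lines

If the printed path points at a different site-packages than the one where pip3 installed ovirt-engine-sdk-python 4.4.4, the module's version check will keep failing.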

Ovirt_vms error when creating vm with pre-allocated disk from a pre-allocated disk template

SUMMARY

ansible 2.9.11
ovirt_engine-sdk4-4.4.4
ovirt_engine-4.3.10.4

It seems the previously reported bug wasn't resolved: ansible/ansible#34154

We are getting the same error message reported in the bug

"msg": "Fault reason is "Operation Failed". Fault detail is "[Cannot add VM. Thin provisioned template disks can not be defined as Raw.]". HTTP response code is 400."
}
The template has only one boot drive and this is the pre-allocated drive:
template8_Disk1 Preallocated VirtIO-SCSI Image

COMPONENT NAME

ovirt_vm

STEPS TO REPRODUCE
EXPECTED RESULTS
ACTUAL RESULTS

Infra: There is no support to import a Storage Domain

The role ovirt.storages lacks support to import an existing Storage Domain.

If I use the same parameters as specified for the ovirt_storage_domain module, I find that the specified ID is not used.

Is this an expected behaviour?

Ovirt hosted-engine-setup cannot detect FC lun id

SUMMARY

I tried to install the hosted engine with Fibre Channel as the storage domain, but this Ansible role cannot assign the variable. While selecting the storage domain I can see my LUNs and their IDs, but it keeps saying the 'value' parameter must be a string.

COMPONENT NAME

In ovirt-ansible-collection/roles/hosted_engine_setup/
In ovirt-ansible-collection/roles/hosted_engine_setup/defaults/main.yml

[ INFO ] TASK [ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
[ ERROR ] TypeError: The 'value' parameter must be a string
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The 'value' parameter must be a string"}

STEPS TO REPRODUCE

Just try to install it with FC.

EXPECTED RESULTS

Add FC storage domain.

ACTUAL RESULTS
[ INFO ] TASK [ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
[ ERROR ] TypeError: The 'value' parameter must be a string
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The 'value' parameter must be a string"}

When it shows my LUN id, I edit ovirt-ansible-collection/roles/hosted_engine_setup/defaults/main.yml and change he_lun_id: null to my LUN id; then it works well. I tried both the GUI and the CLI, but the results are the same.

Do you have any idea to fix this?
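Until the role handles this, a less intrusive workaround than editing defaults/main.yml may be to set the value as a normal variable when invoking the role (variable name taken from the role's defaults; the WWID below is a placeholder):

# e.g. in a vars file passed to the play that runs ovirt.ovirt.hosted_engine_setup
he_lun_id: "36000144000000010e04v878sf8723fd9"   # placeholder, replace with your own LUN WWID

Whether the interactive installer picks this up is untested here; the underlying type error in the role still looks worth fixing.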

Manageiq: CFME evmserverd is started, even if CFME is not installed

TASK [oVirt.manageiq : Fetch info about appliance] *****************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["rpm", "-qi", "cfme"], "delta": "0:00:00.049885", "end": "2020-06-29 14:15:52.138828", "msg": "non-zero return code", "rc": 1, "start": "2020-06-29 14:15:52.088943", "stderr": "", "stderr_lines": [], "stdout": "package cfme is not installed", "stdout_lines": ["package cfme is not installed"]}
...ignoring

TASK [oVirt.manageiq : Start ManageIQ server] **********************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find the requested service evmserverd: host"}

Using kernel_path for unattended boot from ISO does nothing

SUMMARY

Goal is to automatically boot a VM from an ISO similar to as is shown here:

https://github.com/lukepafford/ansible2/blob/1bddf7e1475cfe3c1c2c77bb947dfbf21a2105ef/playbooks/create_vm/create_vm.yml#L60

Using ovirt_vm with kernel_path in an effort to do an unattended boot from ISO does not seem to do anything. The VM is left in a running pre-boot state. All tasks are shown as OK or changed.

The ISO is a customized derivative of the SIMP 6.4.0 ISO:

https://download.simp-project.com/simp/ISO/

/boot/grub2/grub.cfg shows the first menuentry as using linux /vmlinuz-3.10.0-1160.2.1.el7.x86_64. The image exists at /boot/vmlinuz-3.10.0-1160.2.1.el7.x86_64.

COMPONENT NAME

ovirt_vm

STEPS TO REPRODUCE
- name: test
  gather_facts: false
  hosts: localhost
  vars:
    ovirt_vm_name: ovirt_vm_test_01
    ovirt_disk_name: ovirt_disk_test_01
    memory: 128GiB
    disk_size: 250GiB
  tasks:
    - ovirt_auth:
        url: "https://{{ ovirt_engine_fqdn }}/ovirt-engine/api"
        username: redacted@redacted
        ca_file: "{{ ovirt_ca_bundle_file }}"
        password: "{{ ovirt_password }}"
        state: present
        timeout: 12
    - ovirt_disk:
        activate: true
        bootable: true
        ovirt_auth: "{{ ovirt_auth }}"
        name: "{{ ovirt_disk_name }}"
        size: "{{ disk_size }}"
        format: cow
        interface: virtio
        storage_domain: "{{ storage_domain }}"
        state: present
        wait: true
    - ovirt_vm:
        auth: "{{ ovirt_auth }}"
        state: present
        cluster: "{{ cluster }}"
        name: "{{ ovirt_vm_name }}"
        memory: "{{ memory }}"
        max_memory: "{{ memory }}"
        cd_iso: my-custom.iso
        boot_devices:
          - hd
          - cdrom
        disks:
          - name: "{{ ovirt_disk_name }}"
            bootable: true
        type: server
        usb_support: false
        operating_system: other_linux
        nics:
          - name: nic1
            profile_name: redacted
            interface: virtio
    - ovirt_vm:
        name: "{{ ovirt_vm_name }}"
        state: running
        auth: "{{ ovirt_auth }}"
        cluster: "{{ cluster }}"
        kernel_params: 'simp fips=0'
        kernel_path: 'iso://vmlinuz-3.10.0-1160.2.1.el7.x86_64'
        initrd_path: 'iso://initramfs-3.10.0-1160.2.1.el7.x86_64.img'
        kernel_params_persist: false
EXPECTED RESULTS

The VM should boot from the ISO automatically.

ACTUAL RESULTS

Instead I'm required to go into the Console from the RHV-M UI and type simp to boot.

cloud-init on ubuntu 18.04 hostname or fqdn do not set full hostname

Using the cloud_init parameters of ovirt_vm does not set the full FQDN on Ubuntu 18.04, while it works successfully on CentOS 7 and RHEL 7. I have attempted this with a custom_script as well as with the cloud_init parameters:

Ansible task excerpt:

cloud_init:
      custom_script: |
        preserve_hostname: false
        host_name: "{{ rhvm_vm_name }}"
        fqdn: "{{ rhvm_vm_name }}"

Log file excerpt from /var/log/cloud-init.log:

2020-06-08 06:14:43,790 - cc_set_hostname.py[DEBUG]: Setting the hostname to newu-1.habana-labs.com (newu-1)
2020-06-08 06:14:43,790 - util.py[DEBUG]: Reading from /etc/hostname (quiet=False)
2020-06-08 06:14:43,790 - util.py[DEBUG]: Read 19 bytes from /etc/hostname
2020-06-08 06:14:43,790 - util.py[DEBUG]: Writing to /etc/hostname - wb: [644] 7 bytes
2020-06-08 06:14:43,791 - __init__.py[DEBUG]: Non-persistently setting the system hostname to newu-1
2020-06-08 06:14:43,792 - util.py[DEBUG]: Running command ['hostname', 'newu-1'] with allowed return codes [0] (shell=False, capture=True)
2020-06-08 06:14:43,798 - atomic_helper.py[DEBUG]: Atomically writing to file /var/lib/cloud/data/set-hostname (via temporary file /var/lib/cloud/data/tmp0t78eh9q) - w: [644] 61 bytes/chars
2020-06-08 06:14:43,799 - handlers.py[DEBUG]: finish: init-local/config-set_hostname: SUCCESS: config-set_hostname ran successfully
2020-06-08 06:14:43,799 - stages.py[DEBUG]: Running module update_hostname (<module 'cloudinit.config.cc_update_hostname' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_update_hostname.py'>) with frequency always
2020-06-08 06:14:43,799 - handlers.py[DEBUG]: start: init-local/config-update_hostname: running config-update_hostname with frequency always
2020-06-08 06:14:43,799 - helpers.py[DEBUG]: Running config-update_hostname using lock (<cloudinit.helpers.DummyLock object at 0x7f0607029c88>)
2020-06-08 06:14:43,799 - cc_update_hostname.py[DEBUG]: Updating hostname to newu-1.habana-labs.com (newu-1)
2020-06-08 06:14:43,800 - util.py[DEBUG]: Reading from /etc/hostname (quiet=False)
2020-06-08 06:14:43,800 - util.py[DEBUG]: Read 7 bytes from /etc/hostname
2020-06-08 06:14:43,800 - __init__.py[DEBUG]: Attempting to update hostname to newu-1 in 1 files

Rename collection?

Is there a reason the collection is called ovirt.ovirt_collection rather than something shorter?

As a reminder the collection name doesn't need to match the GitHub repo name

ovirt_storage_domain fails to mount NFS domain even though webadmin succeeds

When using ovirt_storage_domain module to create NFS data storage domain, the playbook fails with this:

"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."

However, when I log into webadmin UI and do exactly the same operation manually, it succeeds very quickly. Here is the playbook I use:

- hosts: pit-med-04
  gather_facts: no
  tasks:
    - ovirt_auth:
        url: <censored>
        username: admin@internal
        password: <censored>
        insecure: true
    - ovirt_storage_domain:
        auth: "{{ ovirt_auth }}"
        name: nfs_storage
        host: host_1
        data_center: Default
        nfs:
          address: <censored>
          path: /home/nfs
        timeout: 100

Ansible log with verbosity 4: cryptobin.co/e59862k1
vdsm.log collected during failed attempt using ovirt_storage_domain: https://cryptobin.co/7258k3k1

This is what I entered into the webadmin UI. I made 100% sure that the IP address, the host to use and the NFS share path are the same.
(Screenshot of the NFS storage domain dialog omitted.)

After that, NFS domain has been set up with no issues. Here is vdsm.log from this successful attempt: https://cryptobin.co/b64646k1
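If the manual mount succeeds while the module-driven one does not, it may also be worth pinning the NFS options explicitly in the task instead of relying on auto-negotiation, and then comparing the mount options vdsm logs in both attempts. A hedged sketch, assuming the nfs.version sub-option is available in the installed module version:

    - ovirt_storage_domain:
        auth: "{{ ovirt_auth }}"
        name: nfs_storage
        host: host_1
        data_center: Default
        nfs:
          address: <censored>
          path: /home/nfs
          version: v3          # or v4_1; pin explicitly if auto-negotiation is the suspect
        timeout: 100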

Existing disk with same name with non-helpful error message

Configuring a VM with a disk whose name already exists leads to an error message that does not point the user toward any conclusive action.

Ansible version: 2.9.1
Ovirt SDK version: 4.4.4
ovirt-ansible-vm-infra version: 1.2.3
RHEV-M version: 4.3.9.4-11.el7

The error message in Ansible is:

The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt_disk_payload_ghh7q2ub/ansible_ovirt_disk_payload.zip/ansible/modules/cloud/ovirt/ovirt_disk.py", line 737, in main
  File "/tmp/ansible_ovirt_disk_payload_ghh7q2ub/ansible_ovirt_disk_payload.zip/ansible/modules/cloud/ovirt/ovirt_disk.py", line 558, in update_storage_domains
  File "/tmp/ansible_ovirt_disk_payload_ghh7q2ub/ansible_ovirt_disk_payload.zip/ansible/module_utils/ovirt.py", line 772, in action
    getattr(entity_service, action)(**kwargs)
  File "/opt/venv/lib/python3.8/site-packages/ovirtsdk4/services.py", line 38593, in move
    return self._internal_action(action, 'move', None, headers, query, wait)
  File "/opt/venv/lib/python3.8/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
    return future.wait() if wait else future
  File "/opt/venv/lib/python3.8/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/opt/venv/lib/python3.8/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/opt/venv/lib/python3.8/site-packages/ovirtsdk4/service.py", line 134, in _check_fault
    self._raise_error(response, body.fault)
  File "/opt/venv/lib/python3.8/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[General command validation failure.]". HTTP response code is 500.

fatal: [myhost.int -> localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "activate": true,
            "auth": {
                "ca_file": null,
                "compress": true,
                "headers": null,
                "insecure": true,
                "kerberos": false,
                "timeout": 0,
                "token": "mytoken",
                "url": "https://rhevm.int/ovirt-engine/api"
            },
            "bootable": true,
            "content_type": "data",
            "description": null,
            "download_image_path": null,
            "fetch_nested": false,
            "force": false,
            "format": "raw",
            "host": null,
            "id": null,
            "image_provider": null,
            "interface": "virtio_scsi",
            "logical_unit": null,
            "name": "myhost_val_sys",
            "nested_attributes": [],
            "openstack_volume_type": null,
            "poll_interval": 3,
            "profile": null,
            "quota_id": null,
            "shareable": null,
            "size": "20GiB",
            "sparse": null,
            "sparsify": null,
            "state": "present",
            "storage_domain": "mystoragedomain",
            "storage_domains": null,
            "timeout": 180,
            "upload_image_path": null,
            "vm_id": null,
            "vm_name": "myhost.int",
            "wait": true,
            "wipe_after_delete": null
        }
    },
	"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[General command validation failure.]\". HTTP response code is 500."
}

The traceback in RHEVM engine.log:

2020-09-08 13:58:25,898Z INFO  [org.ovirt.engine.core.bll.storage.disk.MoveDiskCommand] (default task-489) [b3a0e368-e7e9-4d59-851a-64b524f4e07b] Lock Acquired to object 'EngineLock:{exclusiv
eLocks='[c87d2edd-3104-4233-a21f-464721b6a249=DISK]', sharedLocks=''}'
2020-09-08 13:58:25,911Z INFO  [org.ovirt.engine.core.bll.storage.disk.MoveDiskCommand] (default task-489) [b3a0e368-e7e9-4d59-851a-64b524f4e07b] Running command: MoveDiskCommand internal: fa
lse. Entities affected :  ID: c87d2edd-3104-4233-a21f-464721b6a249 Type: DiskAction group CONFIGURE_DISK_STORAGE with role type USER,  ID: c094fcee-61a0-48f5-8b07-32430644c7d9 Type: StorageAction group CREATE_DISK with role type USER
2020-09-08 13:58:25,980Z ERROR [org.ovirt.engine.core.bll.storage.disk.MoveOrCopyDiskCommand] (default task-489) [b3a0e368-e7e9-4d59-851a-64b524f4e07b] Error during ValidateFailure.: java.lan
g.NullPointerException
        at org.ovirt.engine.core.bll.validator.storage.StorageDomainValidator.isDomainWithinThresholds(StorageDomainValidator.java:100) [bll.jar:]
        at org.ovirt.engine.core.bll.validator.storage.MultipleStorageDomainsValidator.lambda$allDomainsWithinThresholds$3(MultipleStorageDomainsValidator.java:83) [bll.jar:]
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) [rt.jar:1.8.0_242]
        at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1720) [rt.jar:1.8.0_242]
        at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126) [rt.jar:1.8.0_242]
        at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:499) [rt.jar:1.8.0_242]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:486) [rt.jar:1.8.0_242]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) [rt.jar:1.8.0_242]
        at java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152) [rt.jar:1.8.0_242]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [rt.jar:1.8.0_242]
        at java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:531) [rt.jar:1.8.0_242]
        at org.ovirt.engine.core.bll.validator.storage.MultipleStorageDomainsValidator.validOrFirstFailure(MultipleStorageDomainsValidator.java:205) [bll.jar:]
        at org.ovirt.engine.core.bll.validator.storage.MultipleStorageDomainsValidator.allDomainsWithinThresholds(MultipleStorageDomainsValidator.java:83) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.MoveOrCopyDiskCommand.validateSpaceRequirements(MoveOrCopyDiskCommand.java:280) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.MoveOrCopyDiskCommand.validate(MoveOrCopyDiskCommand.java:171) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:804) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:413) [bll.jar:]
        at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:451) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:433) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:639) [bll.jar:]
        at sun.reflect.GeneratedMethodAccessor837.invoke(Unknown Source) [:1.8.0_242]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_242]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_242]
        at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
        at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:78)
        at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:88)
        at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:101)
        at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
		at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
        at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:230) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:444) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:162) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
        at org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:72) [weld-ejb.jar:3.0.6.Final-redhat-00003]
        at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
        at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:631)
        at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
        at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
        at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
        at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81)
        at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view3.runInternalAction(Unknown Source) [bll.jar:]
        at sun.reflect.GeneratedMethodAccessor838.invoke(Unknown Source) [:1.8.0_242]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_242]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_242]
        at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:410) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
        at org.jboss.weld.module.ejb.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:134) [weld-ejb.jar:3.0.6.Final-redhat-00003]
        at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
		at org.jboss.weld.module.ejb.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:68) [weld-ejb.jar:3.0.6.Final-redhat-00003]
        at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:106) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
        at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown Source) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.runInternalAction(CommandBase.java:2391) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.MoveDiskCommand.executeCommand(MoveDiskCommand.java:88) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1168) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1326) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2001) [bll.jar:]
        at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:]
        at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:]
        at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1386) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:420) [bll.jar:]
        at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:451) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:433) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:388) [bll.jar:]
        at sun.reflect.GeneratedMethodAccessor710.invoke(Unknown Source) [:1.8.0_242]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_242]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_242]
        at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
        at org.jboss.as.weld.ejb.DelegatingInterceptorInvocationContext.proceed(DelegatingInterceptorInvocationContext.java:92) [wildfly-weld-ejb-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.interceptorChainCompleted(WeldInvocationContextImpl.java:107) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
        at org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.proceed(WeldInvocationContextImpl.java:126) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
        at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
        at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source) [:1.8.0_242]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_242]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_242]
        at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
        at org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.invokeNext(WeldInvocationContextImpl.java:92) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
        at org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.proceed(WeldInvocationContextImpl.java:124) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
        at org.jboss.weld.bean.InterceptorImpl.intercept(InterceptorImpl.java:105) [weld-core-impl.jar:3.0.6.Final-redhat-00003]
        at org.jboss.as.weld.ejb.DelegatingInterceptorInvocationContext.proceed(DelegatingInterceptorInvocationContext.java:82) [wildfly-weld-ejb-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.as.weld.interceptors.EjbComponentInterceptorSupport.delegateInterception(EjbComponentInterceptorSupport.java:60)
        at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:76)
        at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:88)
        at org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:101)
        at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
        at org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13) [bll.jar:]
        at sun.reflect.GeneratedMethodAccessor187.invoke(Unknown Source) [:1.8.0_242]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_242]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_242]
        at org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
		at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
        at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:230) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:444) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:162) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
        at org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81) [weld-ejb.jar:3.0.6.Final-redhat-00003]
        at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-7.2.7.GA-redhat-00005.jar:7.2.7.GA-redhat-00005]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
        at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:631)
        at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57)
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
        at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
        at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
        at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
        at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81)
		at org.ovirt.engine.core.common.interfaces.BackendLocal$$$view4.runAction(Unknown Source) [common.jar:]
        at org.ovirt.engine.api.restapi.resource.BackendResource.doAction(BackendResource.java:250)
        at org.ovirt.engine.api.restapi.resource.AbstractBackendActionableResource.doAction(AbstractBackendActionableResource.java:80)
        at org.ovirt.engine.api.restapi.resource.AbstractBackendActionableResource.doAction(AbstractBackendActionableResource.java:121)
        at org.ovirt.engine.api.restapi.resource.BackendDiskResource.move(BackendDiskResource.java:127)
        at org.ovirt.engine.api.resource.DiskResource.doMove(DiskResource.java:189)
        at sun.reflect.GeneratedMethodAccessor1546.invoke(Unknown Source) [:1.8.0_242]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_242]
        at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_242]
        at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:140) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:509) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:399) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$0(ResourceMethodInvoker.java:363) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.interception.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:358) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:365) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:337) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:137) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:106) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:132) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:100) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:443) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:233) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:139) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.interception.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:358) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:142) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:219) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:227) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) [resteasy-jaxrs.jar:3.6.1.SP7-redhat-00001]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:791) [jboss-servlet-api_4.0_spec.jar:1.0.0.Final-redhat-1]
        at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:74)
        at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:81)
        at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
        at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
        at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:251)
        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchToPath(ServletInitialHandler.java:186)
        at io.undertow.servlet.spec.RequestDispatcherImpl.forwardImpl(RequestDispatcherImpl.java:227)
        at io.undertow.servlet.spec.RequestDispatcherImpl.forwardImplSetup(RequestDispatcherImpl.java:149)
        at io.undertow.servlet.spec.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:111)
        at org.ovirt.engine.api.restapi.invocation.VersionFilter.doFilter(VersionFilter.java:178)
        at org.ovirt.engine.api.restapi.invocation.VersionFilter.doFilter(VersionFilter.java:98)
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.api.restapi.invocation.CurrentFilter.doFilter(CurrentFilter.java:117)
        at org.ovirt.engine.api.restapi.invocation.CurrentFilter.doFilter(CurrentFilter.java:72)
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.core.aaa.filters.RestApiSessionMgmtFilter.doFilter(RestApiSessionMgmtFilter.java:78) [aaa.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.core.aaa.filters.EnforceAuthFilter.doFilter(EnforceAuthFilter.java:42) [aaa.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.core.aaa.filters.SsoRestApiNegotiationFilter.doFilter(SsoRestApiNegotiationFilter.java:84) [aaa.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter.doFilter(SsoRestApiAuthFilter.java:47) [aaa.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.core.aaa.filters.SessionValidationFilter.doFilter(SessionValidationFilter.java:59) [aaa.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.core.aaa.filters.RestApiSessionValidationFilter.doFilter(RestApiSessionValidationFilter.java:35) [aaa.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.api.restapi.security.CSRFProtectionFilter.doFilter(CSRFProtectionFilter.java:111)
        at org.ovirt.engine.api.restapi.security.CSRFProtectionFilter.doFilter(CSRFProtectionFilter.java:102)
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at org.ovirt.engine.core.utils.servlet.CORSSupportFilter.doFilter(CORSSupportFilter.java:283) [utils.jar:]
        at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
        at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
        at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
        at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
        at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
        at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
        at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
        at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132)
        at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:53)
        at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
        at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
        at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:59)
        at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
        at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
        at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
        at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:269)
        at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:78)
        at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:133)
        at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:130)
        at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
        at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
        at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
        at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1504)
        at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1504)
        at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1504)
        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:249)
        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:78)
        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:99)
        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:376)
        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
        at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
        at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
        at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
        at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378)
        at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_242]

An error message such as "A disk/object with the same name already exists" would be far more helpful than the raw stack trace above.
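
Until the error is surfaced more clearly, a pre-check can fail the play with a readable message instead of the stack trace above. This is a minimal sketch of a workaround, not part of the reported reproduction: the disk name data_disk_01 is hypothetical, and it assumes an ovirt_auth fact obtained earlier with ovirt.ovirt.ovirt_auth.

# Hypothetical pre-check before creating the disk
- name: Look up disks that already use the desired name
  ovirt.ovirt.ovirt_disk_info:
    auth: "{{ ovirt_auth }}"
    pattern: "name=data_disk_01"
  register: existing_disks

- name: Abort with a clear message if the name is already taken
  ansible.builtin.fail:
    msg: "A disk named data_disk_01 already exists."
  when: existing_disks.ovirt_disks | length > 0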

Hide credentials in debug mode for ovirt_disk

SUMMARY

The module returns the following output when it is executed in debug mode.

    "invocation": {
        "module_args": {
            "activate": true, 
            "auth": {
                "ca_file": null, 
                "insecure": true, 
                "password": "CLEAR PASSWORD", 
                "token": null, 
                "url": "RHEV URL", 
                "username": "USERNAME"
            },

This output can be hidden with the no_log parameter on the task, but it would be better practice for the module itself to avoid exposing this information, as other modules do, by masking these fields with the "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER" banner.

I tested with both vaulted and plain-text variables, and the result is the same.
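
As an interim workaround until the module masks these fields itself, the task-level no_log keyword suppresses the whole module invocation from verbose and debug output. A minimal sketch, assuming an ovirt_auth fact from ovirt.ovirt.ovirt_auth; the VM and disk names are illustrative:

- name: Attach a disk without leaking credentials in verbose output
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: data_disk_01
    vm_name: test_vm
    size: 10GiB
    format: cow
    interface: virtio
    activate: true
  no_log: true  # hides module_args, including auth.password, from -vvv/debug logs

Note that no_log also hides the module's normal return values, which makes failures harder to diagnose, so masking only the sensitive fields inside the module remains the better fix.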

COMPONENT NAME

ovirt_disk

ovirt inventory plugin doesn't properly filter based on os_type

SUMMARY

When using the ovirt Ansible inventory plugin, an ovirt_query_filter with search: "os_type=..." does not filter as expected; no hosts are returned even though VMs with a matching os_type exist.

COMPONENT NAME

ovirt ansible inventory plugin

STEPS TO REPRODUCE

Create an inventory file named ovirt.yml:


plugin: ovirt.ovirt.ovirt
ovirt_url: https://hosted-enginel/ovirt-engine/api
ovirt_cafile: ca.pem
ovirt_username: admin@internal
ovirt_password: password

View results with:
ansible-inventory -i ovirt.yml --list

This command returns results showing VM info, such as this excerpt:

      },
      "testserver.local": {
        "affinity_groups": [],
        "affinity_labels": [],
        "ansible_host": "10.1.1.10",
        "cluster": "Default",
        "creation_time": "2020-11-19 12:37:50.691000-05:00",
        "creation_time_timestamp": 1605807470.691,
        "description": "Test Server",
        "devices": {
          "eth0": [
            "10.1.1.10",
          ]
        },
        "fqdn": "testserver.local",
        "host": "ovirthost-1",
        "id": "abcdefg",
        "name": "testserver.local",
        "os_type": "rhel_8x64",
        "status": "up",
        "tags": [
        ],
        "template": "Blank"
      },

The os_type attribute is populated. However, when using another inventory file with an os_type search filter, no results are returned:


plugin: ovirt.ovirt.ovirt
ovirt_url: https://hosted-enginel/ovirt-engine/api
ovirt_cafile: ca.pem
ovirt_username: admin@internal
ovirt_password: password
ovirt_query_filter:
  all_content: true
  search: 'os_type=rhel_8x64'
  case_sensitive: no

# ansible-inventory -i ovirt.yml --list
{
  "_meta": {
    "hostvars": {}
  },
  "all": {
    "children": [
      "ungrouped"
    ]
  }
}

EXPECTED RESULTS
ovirt_query_filter:
  all_content: true
  search: 'os_type=rhel_8x64'

This should return all VMs whose os_type matches rhel_8x64.
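
Until the server-side search behaves as expected, a possible client-side workaround is to group hosts by the os_type attribute the plugin already returns, provided the installed plugin version supports the constructed-inventory options (keyed_groups). The group prefix below is illustrative, and this is a sketch rather than a confirmed fix for the search filter itself:

plugin: ovirt.ovirt.ovirt
ovirt_url: https://hosted-enginel/ovirt-engine/api
ovirt_cafile: ca.pem
ovirt_username: admin@internal
ovirt_password: password
ovirt_query_filter:
  all_content: true
keyed_groups:
  # builds groups such as os_rhel_8x64 from each VM's os_type attribute
  - key: os_type
    prefix: os

A play can then target hosts: os_rhel_8x64 instead of relying on the search string.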
