
linux-system-roles / storage


Ansible role for Linux storage management

Home Page: https://linux-system-roles.github.io/storage/

License: MIT License

Languages: Python 76.16% Jinja 0.82% Shell 2.63% JavaScript 1.91% HTML 18.48%

storage's Introduction

Linux Storage Role

CI workflows: ansible-lint, ansible-test, codeql, markdownlint, python-unit-test, shellcheck, woke

This role allows users to configure local storage with minimal input.

As of now, the role supports managing file systems and mount entries on

  • unpartitioned disks
  • lvm (unpartitioned whole-disk physical volumes only)

Requirements

See below

Collection requirements

The role requires external collections. Use the following command to install them:

ansible-galaxy collection install -vv -r meta/collection-requirements.yml

Role Variables

NOTE: Beginning with version 1.3.0, unspecified parameters are interpreted differently for existing and non-existing pools/volumes. For new/non-existent pools and volumes, any omitted parameters will use the default value as described in defaults/main.yml. For existing pools and volumes, omitted parameters will inherit whatever setting the pool or volume already has. This means that to change/override role defaults in an existing pool or volume, you must explicitly specify the new values/settings in the role variables.
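
For example, to adjust a single setting on an existing volume you must restate it explicitly. A minimal sketch (the pool/volume names and the mount point are placeholders):

storage_pools:
  - name: app                  # existing pool
    disks:
      - sdb
    volumes:
      - name: data             # existing volume
        mount_point: /mnt/app/data
        mount_options: ro      # stated explicitly; if omitted, the volume keeps its current mount options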

storage_pools

The storage_pools variable is a list of pools to manage. Each pool contains a nested list of volume dicts as described below, as well as the following keys (an example pool definition follows this list):

  • name

    This specifies the name of the pool to manage/create as a string. (One example of a pool is an LVM volume group.)

  • type

    This specifies the type of pool to manage. Valid values for type: lvm.

  • shared

    If set to true, the role creates or manages a shared volume group. Requires lvmlockd and dlm services configured and running.

    Default: false

    WARNING: Modifying the shared value on an existing pool is a destructive operation. The pool itself will be removed as part of the process.

  • disks

    A list which specifies the set of disks to use as backing storage for the pool. Supported identifiers include: device node (like /dev/sda or /dev/mapper/mpathb), device node basename (like sda or mpathb), /dev/disk/ symlink (like /dev/disk/by-id/wwn-0x5000c5005bc37f3f).

    For LVM pools this can be also used to add and remove disks to/from an existing pool. Disks in the list that are not used by the pool will be added to the pool. Disks that are currently used by the pool but not present in the list will be removed from the pool only if storage_safe_mode is set to false.

  • raid_level

When used with type: lvm, it manages a volume group with an mdraid array of the given level on it. In this case, the input disks are used as RAID members. Accepted values are: linear, raid0, raid1, raid4, raid5, raid6, raid10

  • volumes

    This is a list of volumes that belong to the current pool. It follows the same pattern as the storage_volumes variable, explained below.

  • encryption

    This specifies whether the pool will be encrypted using LUKS. WARNING: Toggling encryption for a pool is a destructive operation, meaning the pool itself will be removed as part of the process of adding/removing the encryption layer.

  • encryption_password

    This string specifies a password or passphrase used to unlock/open the LUKS volume(s).

  • encryption_key

    This string specifies the full path to the key file on the managed nodes used to unlock the LUKS volume(s). It is the responsibility of the user of this role to securely copy this file to the managed nodes, or otherwise ensure that the file is on the managed nodes.

  • encryption_cipher

This string specifies a non-default cipher to be used by LUKS.

  • encryption_key_size

This integer specifies the LUKS key size (in bits).

  • encryption_luks_version

    This integer specifies the LUKS version to use.
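
For illustration, a single storage_pools entry using several of the keys above might look like the following sketch (device names, the password, and sizes are placeholders; use Ansible Vault for real passwords):

storage_pools:
  - name: example_pool
    type: lvm
    disks:
      - sdb
      - sdc
    raid_level: raid1              # the VG sits on an mdraid array; the disks become RAID members
    encryption: true
    encryption_password: changeme  # placeholder only
    encryption_luks_version: 2
    volumes:
      - name: data
        size: "10 GiB"
        mount_point: /mnt/data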

storage_volumes

The storage_volumes variable is a list of volumes to manage. Each volume has the following variables (example volume definitions follow this list):

  • name

    This specifies the name of the volume.

  • type

    This specifies the type of volume on which the file system will reside. Valid values for type: lvm, disk or raid. The default is determined according to the OS and release (currently lvm).

  • disks

    This specifies the set of disks to use as backing storage for the file system. This is currently only relevant for volumes of type disk, where the list must contain only a single item.

  • size

The size specifies the size of the file system. The format for this is intended to be human-readable, e.g.: "10g", "50 GiB". As of v1.4.2, the size of LVM volumes can also be specified as a percentage of the pool/VG size, e.g.: "50%".

    When using compression or deduplication, size can be set higher than the actual available space, e.g.: 3 times the size of the volume, based on the expected duplication and/or compressibility of the stored data.

    NOTE: The requested volume size may be reduced as necessary so the volume can fit in the available pool space, but only if the required reduction is not more than 2% of the requested volume size.

  • fs_type

This indicates the desired file system type to use, e.g.: "xfs", "ext4", "swap". The default is determined according to the OS and release (currently xfs for all the supported systems). Use "unformatted" if you do not want a file system to be present. WARNING: Using the "unformatted" file system type on an existing file system is a destructive operation and will destroy all data on the volume.

  • fs_label

    The fs_label is a string to be used for a file system label.

  • fs_create_options

    The fs_create_options specifies custom arguments to mkfs as a string.

  • mount_point

    The mount_point specifies the directory on which the file system will be mounted.

  • mount_options

    The mount_options specifies custom mount options as a string, e.g.: 'ro'.

  • mount_user

    The mount_user specifies the desired owner of the mount directory.

  • mount_group

    The mount_group specifies the desired group of the mount directory.

  • mount_mode

    The mount_mode specifies the desired permissions of the mount directory.

  • raid_level

Specifies the RAID level. LVM RAID can be created as well. A "regular" RAID volume requires type to be raid. An LVM RAID volume requires the volume to have a storage_pools parent with type lvm, and raid_disks must be specified as well. Accepted values are:

    • for LVM RAID volume: raid0, raid1, raid4, raid5, raid6, raid10, striped, mirror
    • for RAID volume: linear, raid0, raid1, raid4, raid5, raid6, raid10

    WARNING: Changing raid_level for a volume is a destructive operation, meaning all data on that volume will be lost as part of the process of removing old and adding new RAID. RAID reshaping is currently not supported.

  • raid_device_count

    When type is raid, this specifies the number of active RAID devices.

  • raid_spare_count

    When type is raid, this specifies the number of spare RAID devices.

  • raid_metadata_version

    When type is raid, this specifies the RAID metadata version as a string, e.g.: '1.0'.

  • raid_chunk_size

    When type is raid, this specifies the RAID chunk size as a string, e.g.: '512 KiB'. The chunk size has to be a multiple of 4 KiB.

  • raid_stripe_size

    When type is lvm, this specifies the LVM RAID stripe size as a string, e.g.: '512 KiB'.

  • raid_disks

Specifies which disks should be used for an LVM RAID volume. raid_level needs to be specified, and the volume has to have a storage_pools parent with type lvm. Accepts a subset of the disks of the parent storage_pools entry. When there are multiple LVM RAID volumes within the same storage pool, the same disk can be used in the raid_disks of more than one volume.

  • encryption

    This specifies whether the volume will be encrypted using LUKS. WARNING: Toggling encryption for a volume is a destructive operation, meaning all data on that volume will be removed as part of the process of adding/removing the encryption layer.

  • encryption_password

    This string specifies a password or passphrase used to unlock/open the LUKS volume.

  • encryption_key

    This string specifies the full path to the key file on the managed nodes used to unlock the LUKS volume(s). It is the responsibility of the user of this role to securely copy this file to the managed nodes, or otherwise ensure that the file is on the managed nodes.

  • encryption_cipher

    This string specifies a non-default cipher to be used by LUKS.

  • encryption_key_size

    This integer specifies the LUKS key size (in bits).

  • encryption_luks_version

    This integer specifies the LUKS version to use.

  • deduplication

    This specifies whether the Virtual Data Optimizer (VDO) will be used. When set, duplicate data stored on the storage volume will be deduplicated, resulting in more usable storage capacity. It can be used together with compression and vdo_pool_size. The volume has to be part of an LVM storage_pool. Limited to one VDO storage_volume per storage_pool. The underlying volume has to be at least 9 GB (the bare minimum is around 5 GiB).

  • compression

    This specifies whether the Virtual Data Optimizer (VDO) will be used. When set, data stored on the storage volume will be compressed, resulting in more usable storage capacity. The volume has to be part of an LVM storage_pool. It can be used together with deduplication and vdo_pool_size. Limited to one VDO storage_volume per storage_pool.

  • vdo_pool_size

    When the Virtual Data Optimizer (VDO) is used, this specifies the actual size the volume will take on the device. The virtual size of the VDO volume is set by the size parameter. The vdo_pool_size format is intended to be human-readable, e.g.: "30g", "50GiB". The default value is equal to the size of the volume.

  • cached

    This specifies whether the volume should be cached or not. This is currently supported only for LVM volumes, where dm-cache is used.

  • cache_size

    Size of the cache. The cache_size format is intended to be human-readable, e.g.: "30g", "50GiB".

  • cache_mode

    Mode for the cache. Supported values include writethrough (default) and writeback.

  • cache_devices

    List of devices that will be used for the cache. These should be either physical volumes or the drives these physical volumes are allocated on. Generally you want to select fast devices like SSD or NVMe drives for the cache.

  • thin

    Whether the volume should be thinly provisioned or not. This is supported only for LVM volumes.

  • thin_pool_name

    For thin volumes, this can be used to specify the name of the LVM thin pool that will be used for the volume. If a pool with the provided name already exists, the volume will be added to that pool. If it doesn't exist, a new pool named thin_pool_name will be created. If not specified:

    • if there are no existing thin pools present, a new thin pool will be created with an automatically generated name,
    • if there is exactly one existing thin pool, the thin volume will be added to it and
    • if there are multiple thin pools present an exception will be raised.

  • thin_pool_size

    Size for the thin pool. The thin_pool_size format is intended to be human-readable, e.g.: "30g", "50GiB".
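
The following sketch (not a tested playbook; all names, device names, and sizes are placeholders) shows how several of the volume parameters above fit together, both inside a pool and as standalone volumes:

storage_pools:
  - name: example_pool
    disks:
      - sdb
      - sdc
      - nvme0n1
    volumes:
      - name: raid_lv
        size: "20 GiB"
        raid_level: raid1             # LVM RAID; requires an lvm pool parent and raid_disks
        raid_disks:
          - sdb
          - sdc
        mount_point: /mnt/raid_lv
      - name: vdo_lv
        size: "30 GiB"                # virtual size; may exceed the physical space
        deduplication: true
        compression: true
        vdo_pool_size: "10 GiB"       # actual space taken on the device
        mount_point: /mnt/vdo_lv
      - name: thin_lv
        size: "5 GiB"
        thin: true
        thin_pool_name: example_thinpool
        thin_pool_size: "8 GiB"
        mount_point: /mnt/thin_lv
      - name: cached_lv
        size: "10 GiB"
        cached: true
        cache_size: "4 GiB"
        cache_mode: writethrough
        cache_devices:
          - nvme0n1                   # fast device used for dm-cache
        mount_point: /mnt/cached_lv

storage_volumes:
  - name: md_volume
    type: raid
    disks:
      - sdd
      - sde
      - sdf
    raid_level: raid5
    raid_device_count: 3
    raid_spare_count: 0
    raid_metadata_version: "1.2"
    mount_point: /mnt/md_volume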

storage_safe_mode

When true (the default), an error will occur instead of automatically removing existing devices and/or formatting.

storage_udevadm_trigger

When true (the default is false), the role will use udevadm trigger to cause udev changes to take effect immediately. This may help on some platforms with "buggy" udev.
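
Both of these are plain booleans and can be set like any other role variable, for example (a sketch):

- hosts: all
  vars:
    storage_safe_mode: false       # allow the role to remove/reformat existing devices
    storage_udevadm_trigger: true  # force udev changes to take effect immediately
  roles:
    - linux-system-roles.storage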

Example Playbook

- name: Manage storage
  hosts: all
  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: app
          disks:
            - sdb
            - sdc
          volumes:
            - name: shared
              size: "100 GiB"
              mount_point: "/mnt/app/shared"
              #fs_type: xfs
              state: present
            - name: users
              size: "400g"
              fs_type: ext4
              mount_point: "/mnt/app/users"
      storage_volumes:
        - name: images
          type: disk
          disks: ["mpathc"]
          mount_point: /opt/images
          fs_label: images

rpm-ostree

See README-ostree.md

License

MIT

storage's People

Contributors

darxriggs, dependabot[bot], dwlehman, i386x, japokorn, lgtm-com[bot], lucab85, nhosoi, pcahyna, richm, scaronni, spetrosi, tabowling, timflannagan, tyll, ukulekek, vcrhonek, vojtechtrefny, yizhanglinux, yontalcar, zhongchanghui


storage's Issues

Duplicate package checks for lvm2

Both the lv and vg tasks check that the lvm2 package is installed. This results in duplicate checking, which can slow down playbook execution.

[tbowling@tbowling storage]$ grep LVM2 */*
tasks/lv-default.yml:- name: Install LVM2 commands as needed
tasks/vg-default.yml:- name: Install LVM2 commmands as needed

It would be better to check for any packages one time at a global level. This would include any other tooling such as lvm2, parted, or others. A when condition could be used so that you are only checking for those packages when that technology is requested; for example, if use_partitions is false, do not verify that package is installed.
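
A rough sketch of what such a consolidated check could look like (the storage_required_packages variable and the condition are hypothetical, not part of the role):

- name: Install storage tooling once at a global level
  package:
    name: "{{ storage_required_packages }}"   # e.g. lvm2, parted, ...
    state: present
  when: storage_required_packages | length > 0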

storage: For task [manage the pools and volumes to match the specified state] we may need to wait until the mdadm resync has finished.

Hi,

I think after we create the MD RAID, we need to wait for the resync.
While the playbook was running I checked the MD status via "cat /proc/mdstat",
and I found that all of the steps had completed and the test environment had been cleaned up before the resync action finished.

I think we need to wait for the MD RAID to finish its resync,
i.e. wait for the state to become clean in "mdadm -D /dev/md127" before the next steps.
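
Something along these lines might work as the wait (a sketch only; it polls /proc/mdstat, which the reporter used, until no resync is in progress):

- name: Wait until any mdraid resync has finished
  command: cat /proc/mdstat
  register: mdstat
  until: "'resync' not in mdstat.stdout"
  retries: 60
  delay: 10
  changed_when: false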

BR
Fine

Partition table still present on disk after removing a partition in a storage_pool.

blkid output:

/dev/vdc: PTUUID="bb4bd267-7393-434a-b282-6075966e9664" PTTYPE="gpt"

lsblk -f output:

vdb
vdc                                         LVM2_member       bguQSI-EBkk-Tofj-gIuT-j1B4-izLP-gOtRlj
└─bar-test1                                 xfs               98da1876-dd6a-4d9c-8f40-ff6812fed07a   /opt/test1
vdd                                         xfs               498fda97-26bd-49ff-9dfd-2b0e73ae7f99   /opt/test4

Playbook (tests/test.yml):

  vars:
    use_partitions: true

  roles:
    - name: storage
      storage_pools:
        - name: bar
          disks: ['vdc']
          state: "absent"
          volumes:
            - name: test1
              size: 10g
              mount_point: '/opt/test1'
  • If you try to run the playbook using the vdb disk, you get the following error in the playbook:
TASK [storage : configure vg] *************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "err": "  Device /dev/vdc excluded by a filter.\n", "msg": "Creating physical volume '/dev/vdc' failed", "rc": 5}

LVM size does not support the 100% option.

LVM size does not support the 100% option. I tried multiple syntax combinations ("100%", '100%') but both failed.

TASK [linux-system-roles.storage : parse the specified size] **********************************************************************************************************************
fatal: [rhel7]: FAILED! => {"changed": false, "module_stderr": "Shared connection to rhel7 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n  File \"/root/.ansible/tmp/ansible-tmp-1543067239.8836384-269315629014179/AnsiballZ_bsize.py\", line 113, in <module>\r\n    _ansiballz_main()\r\n  File \"/root/.ansible/tmp/ansible-tmp-1543067239.8836384-269315629014179/AnsiballZ_bsize.py\", line 105, in _ansiballz_main\r\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n  File \"/root/.ansible/tmp/ansible-tmp-1543067239.8836384-269315629014179/AnsiballZ_bsize.py\", line 48, in invoke_module\r\n    imp.load_module('__main__', mod, module, MOD_DESC)\r\n  File \"/tmp/ansible_bsize_payload_fk39wL/__main__.py\", line 89, in <module>\r\n  File \"/tmp/ansible_bsize_payload_fk39wL/__main__.py\", line 86, in main\r\n  File \"/tmp/ansible_bsize_payload_fk39wL/__main__.py\", line 71, in run_module\r\n  File \"/tmp/ansible_bsize_payload_fk39wL/ansible_bsize_payload.zip/ansible/module_utils/size.py\", line 22, in __init__\r\n  File \"/tmp/ansible_bsize_payload_fk39wL/ansible_bsize_payload.zip/ansible/module_utils/size.py\", line 88, in _parse_units\r\nValueError: Unable to identify unit '%'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

Playbook used was:

- hosts: all
  become: yes
  become_method: sudo
  become_user: root

  vars:
    use_partitions: false
  tasks:
    - name: Configure Composer Storage
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: composer
            disks: ['vdb']
            # type: lvm
            state: present
            volumes:
              - name: composer
                size: 20G
                # type: lvm
                # fs_type: xfs
                fs_label: "composer"
                mount_point: '/var/lib/lorax/composer'
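
For reference: percentage-based sizes for LVM volumes are supported by the role as of v1.4.2 (see the size documentation above), so a volume definition along these lines should now be accepted:

            volumes:
              - name: composer
                size: "100%"
                fs_label: "composer"
                mount_point: '/var/lib/lorax/composer'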

storage: tests_raid_volume_options.yml, max_return: 3 -> disks_needed: 3

Hi,
This case needs 3 disks for RAID testing (raid_device_count: 2, raid_spare_count: 1).
If the number of disks we get from get_unused_disk is less than 3, the next cases will fail.
So I think we'd better change to disks_needed: 3, as sketched below.
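
That is, in tests_raid_volume_options.yml, something like the following (a sketch; the variable names follow the test helper mentioned in the title):

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        disks_needed: 3   # instead of max_return: 3, so the test requires 3 free disks up front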

TASK [storage : manage the pools and volumes to match the specified state] *******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969 && echo ansible-tmp-1592361950.787065-137572-241363501133969="` echo /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-1370915xok6as5/tmp3pryeh_0 TO /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/ /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592361950.787065-137572-241363501133969/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1150, in run_module
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 868, in manage_volume
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 353, in manage
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 527, in _create
  File "/tmp/ansible_blivet_payload_h6gjcwoe/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 482, in _process_device_numbers
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [],
            "safe_mode": false,
            "use_partitions": true,
            "volumes": [
                {
                    "disks": [
                        "sdj",
                        "sdk"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "test1",
                    "raid_chunk_size": null,
                    "raid_device_count": 2,
                    "raid_level": "raid1",
                    "raid_metadata_version": "1.0",
                    "raid_spare_count": 1,
                    "size": 0,
                    "state": "present",
                    "type": "raid"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "failed to set up volume 'test1': cannot create RAID with 2 members (2 active and 1 spare)",
    "packages": [],
    "pools": [],
    "volumes": []
}

PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost                  : ok=31   changed=0    unreachable=0    failed=1    skipped=19   rescued=0    ignored=0 

Thanks
Yi

Occasionally logical volumes aren't being mounted properly after a playbook run

  • Occasionally, volumes specified in a storage_pool (where state: 'present' and each of the volumes was run with the default state) aren't properly mounted.
  • Typically when this happens, only the last volume specified is correctly mounted, like /dev/mapper/bar-test8 in the examples below.
  • When I re-ran the playbook, test6 and test7 were properly mounted.
  • Before running the playbook, I attempted to add three logical volumes of sizes 10g, 5g, and 5g in a 20g volume group. When the playbook failed due to insufficient space (the VG had a little less than 20g, and the third logical volume failed), I manually unmounted, removed the created volumes from the volume group, removed their entries in /etc/fstab, and removed the VG. I then re-ran the playbook with the modified tests/test.yml file below:

Playbook used:

- name: bar
  disks: ['vdc']
  # state: "absent"
  volumes:
    - name: test6
      size: 5g
      mount_point: '/opt/test6'
    - name: test7
      size: 5g
      mount_point: '/opt/test7'
    - name: test8
      size: 5g
      mount_point: '/opt/test8'

lsblk output (used /dev/vdc) after playbook run:

vdc                                         252:32   0  20G  0 disk 
└─vdc1                                      252:33   0  20G  0 part
  ├─bar-test6                               253:6    0   5G  0 lvm
  ├─bar-test7                               253:7    0   5G  0 lvm
  └─bar-test8                               253:8    0   5G  0 lvm  /opt/test8

blkid output:

/dev/mapper/bar-test6: UUID="bf9a6dbc-4048-4671-8edf-010fb262a7b5" TYPE="xfs"
/dev/mapper/bar-test7: UUID="f3c6def6-5739-4d6d-94e7-2b65f1dcf631" TYPE="xfs"
/dev/mapper/bar-test8: UUID="98ac0141-3413-40cf-bb07-79690cd31695" TYPE="xfs"

umount output (showing that test6 is in fact not mounted):

$ sudo umount /dev/mapper/bar-test6
umount: /dev/mapper/bar-test6: not mounted.

storage: ignore null-blk when doing find_unused_disk

Hi,
The null-blk device was selected when doing find_unused_disk, and it eventually failed.
I tried manually to create a PV/VG on null-blk and found it was excluded by a filter:

environment: RHEL-8.2

$ lsblk -o NAME,FSTYPE,TYPE  /dev/nullb0 /dev/nvme0n1
NAME    FSTYPE TYPE
nullb0         disk
nvme0n1        disk
$ vgcreate foo4 /dev/nullb0 
  Device /dev/nullb0 excluded by a filter.
$ pvcreate /dev/nullb0 
  Device /dev/nullb0 excluded by a filter.

playbook

$ cat tests/nullb0.yml 
---
- hosts: all
  become: true
  vars:
    volume_group_size: '10g'
    volume_size: '80g'
    storage_safe_mode: false

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 2

    - name: Create one logical volumes which has 4 char vg and 78 lv
      include_role:
        name: storage
      vars:
        storage_pools:
            - name: foo4
              disks: ["{{ unused_disks[0] }}"]
              volumes:
                - name: test1
                  size: "{{ volume_size }}"
                  mount_point: '/opt/test1'

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: storage
      vars:
        storage_pools:
            - name: foo4
              disks: ["{{ unused_disks[0] }}"]
              state: absent
              volumes: []

$ ansible-playbook -i inventory tests/nullb0.yml -vvvv

TASK [storage : debug] ******************************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": [
        {
            "disks": [
                "nullb0"
            ],
            "name": "foo4",
            "state": "present",
            "type": "lvm",
            "volumes": [
                {
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "test1",
                    "pool": "foo4",
                    "size": "80g",
                    "state": "present",
                    "type": "lvm"
                }
            ]
        }
    ]
}

TASK [storage : debug] ******************************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": []
}

TASK [storage : get required packages] **************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167 && echo ansible-tmp-1590068525.539217-12932-14548696496167="` echo /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-12439gse6jap4/tmp4o2eo1nd TO /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/ /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590068525.539217-12932-14548696496167/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": true,
            "pools": [
                {
                    "disks": [
                        "nullb0"
                    ],
                    "name": "foo4",
                    "state": "present",
                    "type": "lvm",
                    "volumes": [
                        {
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "test1",
                            "pool": "foo4",
                            "size": "80g",
                            "state": "present",
                            "type": "lvm"
                        }
                    ]
                }
            ],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": []
        }
    },
    "leaves": [],
    "mounts": [],
    "packages": [
        "lvm2",
        "xfsprogs"
    ],
    "pools": [],
    "volumes": []
}

TASK [storage : make sure required packages are installed] ******************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977 && echo ansible-tmp-1590068529.3606427-13013-276846597370977="` echo /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-12439gse6jap4/tmp23kyjljz TO /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/ /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590068529.3606427-13013-276846597370977/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "lvm2",
                "xfsprogs"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : manage the pools and volumes to match the specified state] **************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128 && echo ansible-tmp-1590068533.2727416-13029-153970000470128="` echo /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-12439gse6jap4/tmpwlp4zaij TO /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/ /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590068533.2727416-13029-153970000470128/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_vlkee3mf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 835, in run_module
  File "/usr/lib/python3.6/site-packages/blivet/actionlist.py", line 48, in wrapped_func
    return func(obj, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/actionlist.py", line 327, in process
    action.execute(callbacks)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/deviceaction.py", line 656, in execute
    options=self.device.format_args)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/formats/__init__.py", line 513, in create
    self._create(**kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/formats/lvmpv.py", line 124, in _create
    blockdev.lvm.pvcreate(self.device, data_alignment=self.data_alignment, extra=[ea_yes])
  File "/usr/lib64/python3.6/site-packages/gi/overrides/BlockDev.py", line 993, in wrapped
    raise transform[1](msg)
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [
                {
                    "disks": [
                        "nullb0"
                    ],
                    "name": "foo4",
                    "state": "present",
                    "type": "lvm",
                    "volumes": [
                        {
                            "_device": "/dev/mapper/foo4-test1",
                            "_mount_id": "/dev/mapper/foo4-test1",
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "test1",
                            "pool": "foo4",
                            "size": "80g",
                            "state": "present",
                            "type": "lvm"
                        }
                    ]
                }
            ],
            "safe_mode": false,
            "use_partitions": null,
            "volumes": []
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "Failed to commit changes to disk",
    "packages": [
        "lvm2",
        "e2fsprogs",
        "dosfstools",
        "xfsprogs"
    ],
    "pools": [],
    "volumes": []
}

PLAY RECAP ******************************************************************************************************************************************************************************************************************************************
localhost                  : ok=35   changed=0    unreachable=0    failed=1    skipped=12   rescued=0    ignored=0   

storage: tests_disk_errors.yml will be failed if system has SWAP configured

tests_disk_errors will fail if the system has swap configured. I tried disabling the swap partition in fstab and it passed.

test case

$ ansible-playbook -i inventory tests/tests_disk_errors.yml
TASK [Try to replace the file system on disk in safe mode] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_disk_errors.yml:101

TASK [storage : Set version specific variables] **********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:2
ok: [localhost] => (item=/root/test/storage/vars/RedHat-8.yml) => {"ansible_facts": {"blivet_package_list": ["python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap"]}, "ansible_included_var_files": ["/root/test/storage/vars/RedHat-8.yml"], "ansible_loop_var": "item", "changed": false, "item": "/root/test/storage/vars/RedHat-8.yml"}

TASK [storage : define an empty list of pools to be used in testing] *************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:9
ok: [localhost] => {"ansible_facts": {"_storage_pools_list": []}, "changed": false}

TASK [storage : define an empty list of volumes to be used in testing] ***********************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:13
ok: [localhost] => {"ansible_facts": {"_storage_volumes_list": []}, "changed": false}

TASK [storage : include the appropriate provider tasks] **************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:17
included: /root/test/storage/tasks/main-blivet.yml for localhost

TASK [storage : get a list of rpm packages installed on host machine] ************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:2
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [storage : make sure blivet is available] ***********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:7
ok: [localhost] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}

TASK [storage : initialize internal facts] ***************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:18
ok: [localhost] => {"ansible_facts": {"_storage_pools": [], "_storage_vol_defaults": [], "_storage_vol_pools": [], "_storage_vols_no_defaults": [], "_storage_vols_no_defaults_by_pool": {}, "_storage_vols_w_defaults": [], "_storage_volumes": []}, "changed": false}

TASK [storage : Apply defaults to pools and volumes [1/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:28

TASK [storage : Apply defaults to pools and volumes [2/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:36

TASK [storage : Apply defaults to pools and volumes [3/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:44

TASK [storage : Apply defaults to pools and volumes [4/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:52

TASK [storage : Apply defaults to pools and volumes [5/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:61

TASK [storage : Apply defaults to pools and volumes [6/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:72
ok: [localhost] => (item={'name': 'test1', 'type': 'disk', 'fs_type': 'ext3', 'disks': ['nvme0n1']}) => {"ansible_facts": {"_storage_volumes": [{"disks": ["nvme0n1"], "encryption": false, "encryption_cipher": null, "encryption_key_file": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_passphrase": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "ext3", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "", "name": "test1", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "size": 0, "state": "present", "type": "disk"}]}, "ansible_loop_var": "volume", "changed": false, "volume": {"disks": ["nvme0n1"], "fs_type": "ext3", "name": "test1", "type": "disk"}}

TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": []
}

TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": [
        {
            "disks": [
                "nvme0n1"
            ],
            "encryption": false,
            "encryption_cipher": null,
            "encryption_key_file": null,
            "encryption_key_size": null,
            "encryption_luks_version": null,
            "encryption_passphrase": null,
            "fs_create_options": "",
            "fs_label": "",
            "fs_overwrite_existing": true,
            "fs_type": "ext3",
            "mount_check": 0,
            "mount_device_identifier": "uuid",
            "mount_options": "defaults",
            "mount_passno": 0,
            "mount_point": "",
            "name": "test1",
            "raid_chunk_size": null,
            "raid_device_count": null,
            "raid_level": null,
            "raid_metadata_version": null,
            "raid_spare_count": null,
            "size": 0,
            "state": "present",
            "type": "disk"
        }
    ]
}

TASK [storage : get required packages] *******************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
ok: [localhost] => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": ["e2fsprogs"], "pools": [], "volumes": []}

TASK [storage : make sure required packages are installed] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
ok: [localhost] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}

TASK [storage : manage the pools and volumes to match the specified state] *******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "msg": "cannot remove existing formatting on volume 'test1' in safe mode", "packages": [], "pools": [], "volumes": []}

TASK [Check that we failed in the role] ******************************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_disk_errors.yml:116
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify the output] *********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_disk_errors.yml:122
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Unmount file system] *******************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_disk_errors.yml:129

TASK [storage : Set version specific variables] **********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:2
ok: [localhost] => (item=/root/test/storage/vars/RedHat-8.yml) => {"ansible_facts": {"blivet_package_list": ["python3-blivet", "libblockdev-crypto", "libblockdev-dm", "libblockdev-lvm", "libblockdev-mdraid", "libblockdev-swap"]}, "ansible_included_var_files": ["/root/test/storage/vars/RedHat-8.yml"], "ansible_loop_var": "item", "changed": false, "item": "/root/test/storage/vars/RedHat-8.yml"}

TASK [storage : define an empty list of pools to be used in testing] *************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:9
ok: [localhost] => {"ansible_facts": {"_storage_pools_list": []}, "changed": false}

TASK [storage : define an empty list of volumes to be used in testing] ***********************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:13
ok: [localhost] => {"ansible_facts": {"_storage_volumes_list": []}, "changed": false}

TASK [storage : include the appropriate provider tasks] **************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:17
included: /root/test/storage/tasks/main-blivet.yml for localhost

TASK [storage : get a list of rpm packages installed on host machine] ************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:2
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [storage : make sure blivet is available] ***********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:7
ok: [localhost] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}

TASK [storage : initialize internal facts] ***************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:18
ok: [localhost] => {"ansible_facts": {"_storage_pools": [], "_storage_vol_defaults": [], "_storage_vol_pools": [], "_storage_vols_no_defaults": [], "_storage_vols_no_defaults_by_pool": {}, "_storage_vols_w_defaults": [], "_storage_volumes": []}, "changed": false}

TASK [storage : Apply defaults to pools and volumes [1/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:28

TASK [storage : Apply defaults to pools and volumes [2/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:36

TASK [storage : Apply defaults to pools and volumes [3/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:44

TASK [storage : Apply defaults to pools and volumes [4/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:52

TASK [storage : Apply defaults to pools and volumes [5/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:61

TASK [storage : Apply defaults to pools and volumes [6/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:72
ok: [localhost] => (item={'name': 'test1', 'type': 'disk', 'fs_type': 'ext4', 'disks': ['nvme0n1'], 'mount_point': 'none'}) => {"ansible_facts": {"_storage_volumes": [{"disks": ["nvme0n1"], "encryption": false, "encryption_cipher": null, "encryption_key_file": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_passphrase": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "ext4", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "none", "name": "test1", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "size": 0, "state": "present", "type": "disk"}]}, "ansible_loop_var": "volume", "changed": false, "volume": {"disks": ["nvme0n1"], "fs_type": "ext4", "mount_point": "none", "name": "test1", "type": "disk"}}

TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": []
}

TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": [
        {
            "disks": [
                "nvme0n1"
            ],
            "encryption": false,
            "encryption_cipher": null,
            "encryption_key_file": null,
            "encryption_key_size": null,
            "encryption_luks_version": null,
            "encryption_passphrase": null,
            "fs_create_options": "",
            "fs_label": "",
            "fs_overwrite_existing": true,
            "fs_type": "ext4",
            "mount_check": 0,
            "mount_device_identifier": "uuid",
            "mount_options": "defaults",
            "mount_passno": 0,
            "mount_point": "none",
            "name": "test1",
            "raid_chunk_size": null,
            "raid_device_count": null,
            "raid_level": null,
            "raid_metadata_version": null,
            "raid_spare_count": null,
            "size": 0,
            "state": "present",
            "type": "disk"
        }
    ]
}

TASK [storage : get required packages] *******************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
ok: [localhost] => {"actions": [], "changed": false, "crypts": [], "leaves": [], "mounts": [], "packages": ["e2fsprogs"], "pools": [], "volumes": []}

TASK [storage : make sure required packages are installed] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
ok: [localhost] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}

TASK [storage : manage the pools and volumes to match the specified state] *******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
ok: [localhost] => {"actions": [], "changed": false, "crypts": [], "leaves": ["/dev/sda1", "/dev/sda2", "/dev/mapper/rhel_storageqe--62-home", "/dev/mapper/rhel_storageqe--62-root", "/dev/mapper/rhel_storageqe--62-swap", "/dev/sdb", "/dev/sdh", "/dev/sdi", "/dev/sdj", "/dev/sdc", "/dev/sdk", "/dev/sdl1", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg", "/dev/nvme0n1"], "mounts": [{"path": "/opt/test1", "state": "absent"}, {"dump": 0, "fstype": "ext4", "opts": "defaults", "passno": 0, "path": "none", "src": "UUID=6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6", "state": "mounted"}], "packages": ["e2fsprogs", "lvm2", "xfsprogs", "dosfstools"], "pools": [], "volumes": [{"_device": "/dev/nvme0n1", "_kernel_device": "/dev/nvme0n1", "_mount_id": "UUID=6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6", "_raw_device": "/dev/nvme0n1", "_raw_kernel_device": "/dev/nvme0n1", "disks": ["nvme0n1"], "encryption": false, "encryption_cipher": null, "encryption_key_file": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_passphrase": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "ext4", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "none", "name": "test1", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "size": 0, "state": "present", "type": "disk"}]}

TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:113
ok: [localhost] => {
    "blivet_output": {
        "actions": [],
        "changed": false,
        "crypts": [],
        "failed": false,
        "leaves": [
            "/dev/sda1",
            "/dev/sda2",
            "/dev/mapper/rhel_storageqe--62-home",
            "/dev/mapper/rhel_storageqe--62-root",
            "/dev/mapper/rhel_storageqe--62-swap",
            "/dev/sdb",
            "/dev/sdh",
            "/dev/sdi",
            "/dev/sdj",
            "/dev/sdc",
            "/dev/sdk",
            "/dev/sdl1",
            "/dev/sdd",
            "/dev/sde",
            "/dev/sdf",
            "/dev/sdg",
            "/dev/nvme0n1"
        ],
        "mounts": [
            {
                "path": "/opt/test1",
                "state": "absent"
            },
            {
                "dump": 0,
                "fstype": "ext4",
                "opts": "defaults",
                "passno": 0,
                "path": "none",
                "src": "UUID=6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6",
                "state": "mounted"
            }
        ],
        "packages": [
            "e2fsprogs",
            "lvm2",
            "xfsprogs",
            "dosfstools"
        ],
        "pools": [],
        "volumes": [
            {
                "_device": "/dev/nvme0n1",
                "_kernel_device": "/dev/nvme0n1",
                "_mount_id": "UUID=6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6",
                "_raw_device": "/dev/nvme0n1",
                "_raw_kernel_device": "/dev/nvme0n1",
                "disks": [
                    "nvme0n1"
                ],
                "encryption": false,
                "encryption_cipher": null,
                "encryption_key_file": null,
                "encryption_key_size": null,
                "encryption_luks_version": null,
                "encryption_passphrase": null,
                "fs_create_options": "",
                "fs_label": "",
                "fs_overwrite_existing": true,
                "fs_type": "ext4",
                "mount_check": 0,
                "mount_device_identifier": "uuid",
                "mount_options": "defaults",
                "mount_passno": 0,
                "mount_point": "none",
                "name": "test1",
                "raid_chunk_size": null,
                "raid_device_count": null,
                "raid_level": null,
                "raid_metadata_version": null,
                "raid_spare_count": null,
                "size": 0,
                "state": "present",
                "type": "disk"
            }
        ]
    }
}

TASK [storage : set the list of pools for test verification] *********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:116
ok: [localhost] => {"ansible_facts": {"_storage_pools_list": []}, "changed": false}

TASK [storage : set the list of volumes for test verification] *******************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:120
ok: [localhost] => {"ansible_facts": {"_storage_volumes_list": [{"_device": "/dev/nvme0n1", "_kernel_device": "/dev/nvme0n1", "_mount_id": "UUID=6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6", "_raw_device": "/dev/nvme0n1", "_raw_kernel_device": "/dev/nvme0n1", "disks": ["nvme0n1"], "encryption": false, "encryption_cipher": null, "encryption_key_file": null, "encryption_key_size": null, "encryption_luks_version": null, "encryption_passphrase": null, "fs_create_options": "", "fs_label": "", "fs_overwrite_existing": true, "fs_type": "ext4", "mount_check": 0, "mount_device_identifier": "uuid", "mount_options": "defaults", "mount_passno": 0, "mount_point": "none", "name": "test1", "raid_chunk_size": null, "raid_device_count": null, "raid_level": null, "raid_metadata_version": null, "raid_spare_count": null, "size": 0, "state": "present", "type": "disk"}]}, "changed": false}

TASK [storage : remove obsolete mounts] ******************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:136
changed: [localhost] => (item={'path': '/opt/test1', 'state': 'absent'}) => {"ansible_loop_var": "mount_info", "changed": true, "dump": "0", "fstab": "/etc/fstab", "mount_info": {"path": "/opt/test1", "state": "absent"}, "name": "/opt/test1", "opts": "defaults", "passno": "0"}

TASK [storage : tell systemd to refresh its view of /etc/fstab] ******************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:147
ok: [localhost] => {"changed": false, "name": null, "status": {}}

TASK [storage : set up new/current mounts] ***************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:152
failed: [localhost] (item={'src': 'UUID=6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6', 'path': 'none', 'fstype': 'ext4', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted'}) => {"ansible_loop_var": "mount_info", "changed": false, "mount_info": {"dump": 0, "fstype": "ext4", "opts": "defaults", "passno": 0, "path": "none", "src": "UUID=6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6", "state": "mounted"}, "msg": "Error mounting none: mount: /root/test/storage/tests/none: unknown filesystem type 'swap'.\n"}

PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost                  : ok=95   changed=4    unreachable=0    failed=1    skipped=45   rescued=3    ignored=0 
[root@storageqe-62 storage]# lsblk 
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0 279.4G  0 disk 
├─sda1                        8:1    0   600M  0 part /boot/efi
├─sda2                        8:2    0     1G  0 part /boot
└─sda3                        8:3    0 277.8G  0 part 
  ├─rhel_storageqe--62-root 253:0    0    70G  0 lvm  /
  ├─rhel_storageqe--62-swap 253:1    0   7.9G  0 lvm  [SWAP]
  └─rhel_storageqe--62-home 253:2    0   200G  0 lvm  /home
sdb                           8:16   0 279.4G  0 disk 
sdc                           8:32   0 186.3G  0 disk 
sdd                           8:48   0 111.8G  0 disk 
sde                           8:64   0 111.8G  0 disk 
sdf                           8:80   0 931.5G  0 disk 
sdg                           8:96   0 931.5G  0 disk 
sdh                           8:112  0 931.5G  0 disk 
sdi                           8:128  0 931.5G  0 disk 
sdj                           8:144  0 931.5G  0 disk 
sdk                           8:160  0 279.4G  0 disk 
sdl                           8:176  0 279.4G  0 disk 
└─sdl1                        8:177  0 279.4G  0 part /root/test
nvme0n1                     259:0    0 745.2G  0 disk 
[root@storageqe-62 storage]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Tue Jun 16 02:49:03 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel_storageqe--62-root /                       xfs     defaults        0 0
UUID=0c459216-6a71-4860-8e5f-97bfc9c93095 /boot                   xfs     defaults        0 0
UUID=3189-4B31          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
/dev/mapper/rhel_storageqe--62-home /home                   xfs     defaults        0 0
/dev/mapper/rhel_storageqe--62-swap none                    swap    defaults        0 0
UUID=6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6 none ext4 defaults 0 0

[root@storageqe-62 storage]# blkid | grep 6190b98d-8e08-4a5d-b64d-1f7f99f1e9f
/dev/nvme0n1: UUID="6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6" TYPE="ext4"

storage: no handling of duplicate volume names

When I create an LV with a name that already exists, lvcreate fails and prints a warning.
When I test the same scenario through the role in a playbook, there is no such warning or failure.

[root@storageqe-62 storage]# lvcreate -L 3G -n test2 foo
WARNING: xfs signature detected on /dev/foo/test2 at offset 0. Wipe it? [y/n]: y
Wiping xfs signature on /dev/foo/test2.
Logical volume "test2" created.
[root@storageqe-62 storage]# lvcreate -L 3G -n test2 foo
Logical Volume "test2" already exists in volume group "foo"
[root@storageqe-62 storage]# echo $?
5

test case:
[root@storageqe-62 storage]# cat tests/test1.yml

- hosts: all
  become: true
  vars:
    volume_group_size: '10g'
    volume_size: '3g'

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create three LVM logical volumes under one volume group
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                size: "{{ volume_size }}"
                mount_point: '/opt/test1'
              - name: test2
                size: "{{ volume_size }}"
                mount_point: '/opt/test2'
              - name: test2
                size: "{{ volume_size }}"
                mount_point: '/opt/test3'

Executing log:
TASK [Print out pool information] ********************************************************************************************************************************************************************************
task path: /root/storage/tests/verify-role-results.yml:1
ok: [localhost] => {
"_storage_pools_list": [
{
"disks": [
"sdb"
],
"name": "foo",
"state": "present",
"type": "lvm",
"volumes": [
{
"_device": "/dev/mapper/foo-test1",
"_mount_id": "/dev/mapper/foo-test1",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"size": "3g",
"state": "present",
"type": "lvm"
},
{
"_device": "/dev/mapper/foo-test2",
"_mount_id": "/dev/mapper/foo-test2",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test2",
"name": "test2",
"pool": "foo",
"size": "3g",
"state": "present",
"type": "lvm"
},
{
"_device": "/dev/mapper/foo-test2",
"_mount_id": "/dev/mapper/foo-test2",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test3",
"name": "test2",
"pool": "foo",
"size": "3g",
"state": "present",
"type": "lvm"
}
]
}
]
}
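The verification output above shows two volumes both mapping to /dev/mapper/foo-test2, so the role silently accepted the duplicate name. The role could surface this much earlier with a pre-flight uniqueness check on the requested volume names, before the spec is handed to the blivet module. A minimal sketch, purely illustrative and not part of the role:

- name: Fail on duplicate volume names within a pool  # hypothetical pre-flight check
  fail:
    msg: "Pool '{{ item.name }}' defines more than one volume with the same name"
  when: >-
    (item.volumes | default([]) | map(attribute='name') | list | length) !=
    (item.volumes | default([]) | map(attribute='name') | list | unique | length)
  loop: "{{ storage_pools | default([]) }}"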

Cannot create a file system on a disk which contains a partition

If there is a partition on a disk and I try to create a filesystem (disk volume) directly on the disk (which should destroy the partition), the role fails in the blivet module:

TASK [linux-system-roles.storage : manage the pools and volumes to match the specified state] ***
fatal: [rhel8.1-76-storage]: FAILED! => {"actions": [], "changed": false, "leaves": [], "mounts": [], "msg": "Failed to commit changes to disk", "packages": ["xfsprogs", "e2fsprogs", "lvm2"], "pools": [], "volumes": []}

Reproducer
preparation:

# fdisk /dev/vdc

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The old xfs signature will be removed by a write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x853a732c.

Command (m for help): p
Disk /dev/vdc: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x853a732c

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-20971519, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-20971519, default 20971519): 

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
[  836.891961]  vdc: vdc1
Syncing disks.

playbook:

- hosts: all
  roles:
    - name: linux-system-roles.storage
      storage_safe_mode: false
      storage_volumes:
        - name: barefs
          type: disk
          disks:
            - /dev/vdc
          fs_type: ext3

Note that safe mode is off, so the role should be able to remove existing partitions. Note also that the failure is not because the partition is mounted: it is not mounted, and there is not even a filesystem on it.
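Until the role can handle this itself, one stopgap is to wipe the disk in a pre-task before including the role. A hypothetical, destructive workaround (the device name /dev/vdc comes from the reproducer above), not a fix in the role:

# Destroys the partition table and all signatures on /dev/vdc so blivet sees a blank disk.
- name: Wipe /dev/vdc before running the storage role (workaround only)
  command: wipefs -a /dev/vdc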

Modifying the mount location on an existing disk device causes the playbook to fail

Example of playbook that would reproduce this:

    - name: Create a disk device mounted at '/opt/test1'
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            mount_point: '/opt/test1'
            disks: ['vdb']
    
    - name: Change the disk device mount location to '/opt/test2'
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            mount_point: '/opt/test2'
            disks: ['vdb']

Output (the role first wipes the existing filesystem on /dev/vdb and then fails to re-create it because the old filesystem is still mounted):

TASK [storage : Remove file system as needed] *********************************************************************************************************************************************************************
task path: /home/tim/Documents/storage/tasks/fs-default.yml:59
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tim
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1549388550.8278959-101630898191692 `" && echo ansible-tmp-1549388550.8278959-101630898191692="` echo /root/.ansible/tmp/ansible-tmp-1549388550.8278959-101630898191692 `" ) && sleep 0'
Using module file /usr/lib/python3.7/site-packages/ansible/modules/commands/command.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-9757nqfhq7kn/tmp1jn3g0hm TO /root/.ansible/tmp/ansible-tmp-1549388550.8278959-101630898191692/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1549388550.8278959-101630898191692/ /root/.ansible/tmp/ansible-tmp-1549388550.8278959-101630898191692/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ylcynifwzjqwmtbtcmikvotmzrvbrgag; /usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1549388550.8278959-101630898191692/AnsiballZ_command.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1549388550.8278959-101630898191692/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true,
    "cmd": [
        "wipefs",
        "-af",
        "/dev/vdb"
    ],
    "delta": "0:00:00.013150",
    "end": "2019-02-05 12:42:31.073063",
    "invocation": {
        "module_args": {
            "_raw_params": "wipefs -af /dev/vdb",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "rc": 0,
    "start": "2019-02-05 12:42:31.059913",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "/dev/vdb: 4 bytes were erased at offset 0x00000000 (xfs): 58 46 53 42",
    "stdout_lines": [
        "/dev/vdb: 4 bytes were erased at offset 0x00000000 (xfs): 58 46 53 42"
    ]
}

TASK [storage : Create filesystem as needed] **********************************************************************************************************************************************************************
task path: /home/tim/Documents/storage/tasks/fs-default.yml:63
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: tim
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1549388551.2953556-165794858308127 `" && echo ansible-tmp-1549388551.2953556-165794858308127="` echo /root/.ansible/tmp/ansible-tmp-1549388551.2953556-165794858308127 `" ) && sleep 0'
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/filesystem.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-9757nqfhq7kn/tmp92cqfhyn TO /root/.ansible/tmp/ansible-tmp-1549388551.2953556-165794858308127/AnsiballZ_filesystem.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1549388551.2953556-165794858308127/ /root/.ansible/tmp/ansible-tmp-1549388551.2953556-165794858308127/AnsiballZ_filesystem.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ytjmxvjpbixbspktudmdphbjxyojhvjz; /usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1549388551.2953556-165794858308127/AnsiballZ_filesystem.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1549388551.2953556-165794858308127/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "cmd": "/usr/sbin/mkfs.xfs -f /dev/vdb",
    "invocation": {
        "module_args": {
            "dev": "/dev/vdb",
            "force": false,
            "fstype": "xfs",
            "opts": "",
            "resizefs": false
        }
    },
    "msg": "mkfs.xfs: /dev/vdb contains a mounted filesystem\nUsage: mkfs.xfs\n/* blocksize */\t\t[-b size=num]\n/* metadata */\t\t[-m crc=0|1,finobt=0|1,uuid=xxx,rmapbt=0|1,reflink=0|1]\n/* data subvol */\t[-d agcount=n,agsize=n,file,name=xxx,size=num,\n\t\t\t    (sunit=value,swidth=value|su=num,sw=num|noalign),\n\t\t\t    sectsize=num\n/* force overwrite */\t[-f]\n/* inode size */\t[-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,\n\t\t\t    projid32bit=0|1,sparse=0|1]\n/* no discard */\t[-K]\n/* log subvol */\t[-l agnum=n,internal,size=num,logdev=xxx,version=n\n\t\t\t    sunit=value|su=num,sectsize=num,lazy-count=0|1]\n/* label */\t\t[-L label (maximum 12 characters)]\n/* naming */\t\t[-n size=num,version=2|ci,ftype=0|1]\n/* no-op info only */\t[-N]\n/* prototype file */\t[-p fname]\n/* quiet */\t\t[-q]\n/* realtime subvol */\t[-r extsize=num,size=num,rtdev=xxx]\n/* sectorsize */\t[-s size=num]\n/* version */\t\t[-V]\n\t\t\tdevicename\n<devicename> is required unless -d name=xxx is given.\n<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),\n      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).\n<value> is xxx (512 byte blocks).",
    "rc": 1,
    "stderr": "mkfs.xfs: /dev/vdb contains a mounted filesystem\nUsage: mkfs.xfs\n/* blocksize */\t\t[-b size=num]\n/* metadata */\t\t[-m crc=0|1,finobt=0|1,uuid=xxx,rmapbt=0|1,reflink=0|1]\n/* data subvol */\t[-d agcount=n,agsize=n,file,name=xxx,size=num,\n\t\t\t    (sunit=value,swidth=value|su=num,sw=num|noalign),\n\t\t\t    sectsize=num\n/* force overwrite */\t[-f]\n/* inode size */\t[-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,\n\t\t\t    projid32bit=0|1,sparse=0|1]\n/* no discard */\t[-K]\n/* log subvol */\t[-l agnum=n,internal,size=num,logdev=xxx,version=n\n\t\t\t    sunit=value|su=num,sectsize=num,lazy-count=0|1]\n/* label */\t\t[-L label (maximum 12 characters)]\n/* naming */\t\t[-n size=num,version=2|ci,ftype=0|1]\n/* no-op info only */\t[-N]\n/* prototype file */\t[-p fname]\n/* quiet */\t\t[-q]\n/* realtime subvol */\t[-r extsize=num,size=num,rtdev=xxx]\n/* sectorsize */\t[-s size=num]\n/* version */\t\t[-V]\n\t\t\tdevicename\n<devicename> is required unless -d name=xxx is given.\n<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),\n      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).\n<value> is xxx (512 byte blocks).\n",
    "stderr_lines": [
        "mkfs.xfs: /dev/vdb contains a mounted filesystem",
        "Usage: mkfs.xfs",
        "/* blocksize */\t\t[-b size=num]",
        "/* metadata */\t\t[-m crc=0|1,finobt=0|1,uuid=xxx,rmapbt=0|1,reflink=0|1]",
        "/* data subvol */\t[-d agcount=n,agsize=n,file,name=xxx,size=num,",
        "\t\t\t    (sunit=value,swidth=value|su=num,sw=num|noalign),",
        "\t\t\t    sectsize=num",
        "/* force overwrite */\t[-f]",
        "/* inode size */\t[-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,",
        "\t\t\t    projid32bit=0|1,sparse=0|1]",
        "/* no discard */\t[-K]",
        "/* log subvol */\t[-l agnum=n,internal,size=num,logdev=xxx,version=n",
        "\t\t\t    sunit=value|su=num,sectsize=num,lazy-count=0|1]",
        "/* label */\t\t[-L label (maximum 12 characters)]",
        "/* naming */\t\t[-n size=num,version=2|ci,ftype=0|1]",
        "/* no-op info only */\t[-N]",
        "/* prototype file */\t[-p fname]",
        "/* quiet */\t\t[-q]",
        "/* realtime subvol */\t[-r extsize=num,size=num,rtdev=xxx]",
        "/* sectorsize */\t[-s size=num]",
        "/* version */\t\t[-V]",
        "\t\t\tdevicename",
        "<devicename> is required unless -d name=xxx is given.",
        "<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),",
        "      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).",
        "<value> is xxx (512 byte blocks)."
    ],
    "stdout": "",
    "stdout_lines": []
}

consider failing earlier when no device is provided

I accidentally executed a playbook using the storage role without a device defined. Would it be good to detect very early that a device name is null or invalid, before executing so much of the role's logic? Below is the snippet from my playbook:

    - name: Configure Image Builder Storage
      include_role:
        name: linux-system-roles.storage
      vars:
        use_partitions: false
        storage_pools:
          - name: image_builder
            disks: ['']  # something like vdb
            # type: lvm
            state: present
            volumes:
              - name: composer
                size: "19.5G"
                # type: lvm
                # fs_type: xfs
                fs_label: "imgbldr"
                mount_point: '/var/lib/lorax/composer'
      when: CONFIG_STORAGE
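A pre-flight check along these lines could catch the empty disk entry before any storage logic runs. A hypothetical sketch, not something the role currently does:

# Illustrative only: abort early if a pool lists no usable (non-empty) disk names.
- name: Fail early when a storage pool has no usable disks
  fail:
    msg: "storage_pools entry '{{ item.name }}' has no non-empty disk names"
  when: (item.disks | default([]) | select | list | length) == 0
  loop: "{{ storage_pools | default([]) }}"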

storage: relabel test failed: unable to change the label a second time

Hi,
Setting the label works the first time, but changing it to a new value on a second run fails.

environment: RHEL-8.2

[root@node1 ~]# lsblk -o NAME,FSTYPE,TYPE
NAME                FSTYPE      TYPE
sr0                             rom
vda                             disk
├─vda1              xfs         part
└─vda2              LVM2_member part
  ├─rhel_node1-root xfs         lvm
  └─rhel_node1-swap swap        lvm
vdb                             disk
vdc                             disk

playbook

# cat tests/test-fs-relabel.yml 
---
- hosts: all
  become: true
  vars:
    storage_safe_mode: false
    mount_location: '/opt/test1'

  tasks:
    - include_role:
        name: storage
    - include_tasks: get_unused_disk.yml
      vars:
        max_return: 1

    - name: set label
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            mount_point: "{{ mount_location }}"
            fs_type: ext4
            disks: "{{ unused_disks }}"
            fs_label: label

    - include_tasks: verify-role-results.yml

    - name: relabel
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            mount_point: "{{ mount_location }}"
            fs_type: ext4
            disks: "{{ unused_disks }}"
            fs_label: relabel

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            mount_point: "{{ mount_location }}"
            disks: "{{ unused_disks }}"
            state: absent

    - include_tasks: verify-role-results.yml

log

TASK [relabel] ****************************************************************************

TASK [storage : Set version specific variables] *******************************************
ok: [192.168.122.101] => (item=/root/ansible-test/storage/vars/RedHat-8.yml)

TASK [storage : define an empty list of pools to be used in testing] **********************
ok: [192.168.122.101]

TASK [storage : define an empty list of volumes to be used in testing] ********************
ok: [192.168.122.101]

TASK [storage : include the appropriate provider tasks] ***********************************
included: /root/ansible-test/storage/tasks/main-blivet.yml for 192.168.122.101

TASK [storage : get a list of rpm packages installed on host machine] *********************
skipping: [192.168.122.101]

TASK [storage : make sure blivet is available] ********************************************
ok: [192.168.122.101]

TASK [storage : initialize internal facts] ************************************************
ok: [192.168.122.101]

TASK [storage : Apply defaults to pools and volumes [1/6]] ********************************

TASK [storage : Apply defaults to pools and volumes [2/6]] ********************************

TASK [storage : Apply defaults to pools and volumes [3/6]] ********************************

TASK [storage : Apply defaults to pools and volumes [4/6]] ********************************

TASK [storage : Apply defaults to pools and volumes [5/6]] ********************************

TASK [storage : Apply defaults to pools and volumes [6/6]] ********************************
ok: [192.168.122.101] => (item={'name': 'test1', 'type': 'disk', 'mount_point': '/opt/test1', 'fs_type': 'ext4', 'disks': ['vdc'], 'fs_label': 'relabel'})

TASK [storage : debug] ********************************************************************
ok: [192.168.122.101] => {
    "_storage_pools": []
}

TASK [storage : debug] ********************************************************************
ok: [192.168.122.101] => {
    "_storage_volumes": [
        {
            "disks": [
                "vdc"
            ],
            "fs_create_options": "",
            "fs_label": "relabel",
            "fs_overwrite_existing": true,
            "fs_type": "ext4",
            "mount_check": 0,
            "mount_device_identifier": "uuid",
            "mount_options": "defaults",
            "mount_passno": 0,
            "mount_point": "/opt/test1",
            "name": "test1",
            "size": 0,
            "state": "present",
            "type": "disk"
        }
    ]
}

TASK [storage : get required packages] ****************************************************
ok: [192.168.122.101]

TASK [storage : make sure required packages are installed] ********************************
ok: [192.168.122.101]

TASK [storage : manage the pools and volumes to match the specified state] ****************
ok: [192.168.122.101]

TASK [storage : debug] ********************************************************************
ok: [192.168.122.101] => {
    "blivet_output": {
        "actions": [],
        "changed": false,
        "failed": false,
        "leaves": [
            "/dev/vda1",
            "/dev/mapper/rhel_node1-root",
            "/dev/mapper/rhel_node1-swap",
            "/dev/vdb",
            "/dev/vdc",
            "/dev/sr0"
        ],
        "mounts": [
            {
                "dump": 0,
                "fstype": "ext4",
                "opts": "defaults",
                "passno": 0,
                "path": "/opt/test1",
                "src": "UUID=e957f05d-1357-4c1a-a028-4fa02f50aaad",
                "state": "mounted"
            }
        ],
        "packages": [
            "lvm2",
            "xfsprogs",
            "e2fsprogs"
        ],
        "pools": [],
        "volumes": [
            {
                "_device": "/dev/vdc",
                "_mount_id": "UUID=e957f05d-1357-4c1a-a028-4fa02f50aaad",
                "disks": [
                    "vdc"
                ],
                "fs_create_options": "",
                "fs_label": "relabel",
                "fs_overwrite_existing": true,
                "fs_type": "ext4",
                "mount_check": 0,
                "mount_device_identifier": "uuid",
                "mount_options": "defaults",
                "mount_passno": 0,
                "mount_point": "/opt/test1",
                "name": "test1",
                "size": 0,
                "state": "present",
                "type": "disk"
            }
        ]
    }
}

TASK [storage : set the list of pools for test verification] ******************************
ok: [192.168.122.101]

TASK [storage : set the list of volumes for test verification] ****************************
ok: [192.168.122.101]

TASK [storage : manage mounts to match the specified state] *******************************
changed: [192.168.122.101] => (item={'src': 'UUID=e957f05d-1357-4c1a-a028-4fa02f50aaad', 'path': '/opt/test1', 'fstype': 'ext4', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted'})

TASK [storage : tell systemd to refresh its view of /etc/fstab] ***************************
ok: [192.168.122.101]

TASK [storage : Update facts] *************************************************************
ok: [192.168.122.101]

TASK [include_tasks] **********************************************************************
included: /root/ansible-test/storage/tests/verify-role-results.yml for 192.168.122.101

TASK [Print out pool information] *********************************************************
skipping: [192.168.122.101]

TASK [Print out volume information] *******************************************************
ok: [192.168.122.101] => {
    "_storage_volumes_list": [
        {
            "_device": "/dev/vdc",
            "_mount_id": "UUID=e957f05d-1357-4c1a-a028-4fa02f50aaad",
            "disks": [
                "vdc"
            ],
            "fs_create_options": "",
            "fs_label": "relabel",
            "fs_overwrite_existing": true,
            "fs_type": "ext4",
            "mount_check": 0,
            "mount_device_identifier": "uuid",
            "mount_options": "defaults",
            "mount_passno": 0,
            "mount_point": "/opt/test1",
            "name": "test1",
            "size": 0,
            "state": "present",
            "type": "disk"
        }
    ]
}

TASK [Collect info about the volumes.] ****************************************************
ok: [192.168.122.101]

TASK [Read the /etc/fstab file for volume existence] **************************************
ok: [192.168.122.101]

TASK [Verify the volumes listed in storage_pools were correctly managed] ******************

TASK [Clean up variable namespace] ********************************************************
ok: [192.168.122.101]

TASK [Verify the volumes with no pool were correctly managed] *****************************
[WARNING]: The loop variable 'storage_test_volume' is already in use. You should set the
`loop_var` value in the `loop_control` option for the task to something else to avoid
variable collisions and unexpected behavior.
included: /root/ansible-test/storage/tests/test-verify-volume.yml for 192.168.122.101

TASK [set_fact] ***************************************************************************
ok: [192.168.122.101]

TASK [include_tasks] **********************************************************************
included: /root/ansible-test/storage/tests/test-verify-volume-mount.yml for 192.168.122.101
included: /root/ansible-test/storage/tests/test-verify-volume-fstab.yml for 192.168.122.101
included: /root/ansible-test/storage/tests/test-verify-volume-fs.yml for 192.168.122.101
included: /root/ansible-test/storage/tests/test-verify-volume-device.yml for 192.168.122.101

TASK [Set some facts] *********************************************************************
ok: [192.168.122.101]

TASK [Verify the current mount state by device] *******************************************
ok: [192.168.122.101] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify the current mount state by mount point] **************************************
ok: [192.168.122.101] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify the mount fs type] ***********************************************************
ok: [192.168.122.101] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Unset facts] ************************************************************************
ok: [192.168.122.101]

TASK [Set some variables for fstab checking] **********************************************
ok: [192.168.122.101]

TASK [Verify that the device identifier appears in /etc/fstab] ****************************
ok: [192.168.122.101] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify the fstab mount point] *******************************************************
ok: [192.168.122.101] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Clean up variables] *****************************************************************
ok: [192.168.122.101]

TASK [Verify fs type] *********************************************************************
ok: [192.168.122.101] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify fs label] ********************************************************************
fatal: [192.168.122.101]: FAILED! => {
    "assertion": false,
    "changed": false,
    "evaluated_to": false,
    "msg": "Assertion failed"
}

PLAY RECAP ********************************************************************************
192.168.122.101            : ok=105  changed=3    unreachable=0    failed=1    skipped=26   rescued=0    ignored=0   

integration tests should clean up after themselves

When running the integration tests anywhere other than in a throwaway VM, it quickly becomes evident that some automated cleanup would be handy, both for the test runner's convenience and for the additional test coverage. The cleanup should probably only happen when the test validation was successful.
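One way to express that (a hypothetical pattern, not what the tests do today; the pool name "foo" and "unused_disks" mirror the existing test playbooks) is to record whether verification passed and make the clean-up task conditional on it:

# Illustrative pattern: clean up only after verification has succeeded, so a
# failed run leaves the storage in place for debugging.
- include_tasks: verify-role-results.yml

- set_fact:
    test_verified: true

- name: Clean up test storage (only after successful verification)
  include_role:
    name: storage
  vars:
    storage_pools:
      - name: foo
        disks: "{{ unused_disks }}"
        state: absent
  when: test_verified | default(false)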

storage: resize function does not work for xfs filesystems

Pulled #97 for local testing and found that the resize function does not work when the filesystem type is xfs.
The LV capacity does not actually change when resizing from 10g to 15g, yet the terminal output reports the run as passed.

By the way, resizing works fine for ext2/ext3/ext4.

environment: RHEL-8.2

playbook

---
- hosts: all
  become: true
  vars:
    mount_location: '/opt/test1'
    volume_group_size: '5g'
    volume_size_before: '10g'
    volume_size_after: '15g'
    storage_safe_mode: false

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create one LVM logical volume with "{{ volume_size_before }}" under one volume group
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              type: lvm
              volumes:
                - name: test1
                  fs_type: 'xfs'
                  size: "{{ volume_size_before }}"
                  mount_point: "{{ mount_location }}"

    - shell: lsblk | grep foo-test1

    - shell: mount | grep foo-test1

    - include_tasks: verify-role-results.yml

    - name: Change volume_size  "{{ volume_size_after }}"
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              type: lvm
              disks: "{{ unused_disks }}"
              volumes:
                - name: test1
                  fs_type: 'xfs'
                  size: "{{ volume_size_after }}"
                  mount_point: "{{ mount_location }}"

    - shell: lsblk | grep foo-test1

    - shell: mount | grep foo-test1

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              state: absent
              volumes:
                - name: test1
                  size: "{{ volume_size_after }}"
                  mount_point: "{{ mount_location }}"

    - include_tasks: verify-role-results.yml
                                       

For example, the ext4 resize produces these actions:

"blivet_output": {
    "actions": [
        {
            "action": "resize device",
            "device": "/dev/mapper/foo-test1",
            "fs_type": null
        },
        {
            "action": "resize format",
            "device": "/dev/mapper/foo-test1",
            "fs_type": "ext4"
        }
    ],

but for xfs the actions list is empty.

output log

#resize to 15g
TASK [storage : debug] ******************************************************************************************************
task path: /root/ansible-test/upstream/storage/tasks/main-blivet.yml:113
ok: [192.168.122.101] => {
    "blivet_output": {
        "actions": [],
        "changed": false,
        "failed": false,
        "leaves": [
            "/dev/vda1",
            "/dev/mapper/rhel_node1-root",
            "/dev/mapper/rhel_node1-swap",
            "/dev/mapper/foo-test1",
            "/dev/vdc",
            "/dev/sr0"
        ],
        "mounts": [
            {
                "dump": 0,
                "fstype": "xfs",
                "opts": "defaults",
                "passno": 0,
                "path": "/opt/test1",
                "src": "/dev/mapper/foo-test1",
                "state": "mounted"
            }
        ],
        "packages": [
            "xfsprogs",
            "lvm2"
        ],
        "pools": [
            {
                "disks": [
                    "vdb"
                ],
                "name": "foo",
                "state": "present",
                "type": "lvm",
                "volumes": [
                    {
                        "_device": "/dev/mapper/foo-test1",
                        "_mount_id": "/dev/mapper/foo-test1",
                        "fs_create_options": "",
                        "fs_label": "",
                        "fs_overwrite_existing": true,
                        "fs_type": "xfs",
                        "mount_check": 0,
                        "mount_device_identifier": "uuid",
                        "mount_options": "defaults",
                        "mount_passno": 0,
                        "mount_point": "/opt/test1",
                        "name": "test1",
                        "pool": "foo",
                        "size": "15g",
                        "state": "present",
                        "type": "lvm"
                    }
                ]
            }
        ],
        "volumes": []
    }
}
.
.
.
.
.
<192.168.122.101> (0, b'', b'')
changed: [192.168.122.101] => {
    "changed": true,
    "cmd": "lsblk | grep foo-test1",
    "delta": "0:00:00.005214",
    "end": "2020-07-02 11:06:14.608691",
    "invocation": {
        "module_args": {
            "_raw_params": "lsblk | grep foo-test1",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "rc": 0,
    "start": "2020-07-02 11:06:14.603477",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "└─foo-test1         253:2    0   10G  0 lvm  /opt/test1",
    "stdout_lines": [
        "└─foo-test1         253:2    0   10G  0 lvm  /opt/test1"
    ]
}

support identifying existing devices by UUID

The current codebase does not make any attempt to identify devices by descriptors other than name. While we cannot specify a UUID for new devices, we could support looking up existing devices via a uuid field.
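Purely illustrative syntax for the proposal (the uuid field does not exist in the role today; the UUID value is just an example):

storage_volumes:
  - name: test1
    type: disk
    uuid: "6190b98d-8e08-4a5d-b64d-1f7f99f1e9f6"  # hypothetical lookup key for an existing device
    mount_point: /opt/test1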

Modify any conditional that relies on ansible_facts.lvm presence.

  • There are multiple task/block conditionals that reference keys in ansible_facts.lvm. That is problematic when the user does not have lvm2 installed beforehand and runs the playbook in check mode: if lvm2 is not installed, ansible_facts.lvm will not be available, and Ansible will terminate the playbook execution.
  • If the block in tasks/vg-default.yml is run in check mode, that block will not execute, and subsequent tasks that rely on ansible_facts.lvm will return a fatal status. A defensive form of such a conditional is sketched below.
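A minimal sketch of such a guard (the task itself is hypothetical; only the ansible_facts.lvm fact path is from the issue):

- name: Example task guarded against missing LVM facts
  debug:
    msg: "{{ ansible_facts.lvm.vgs | default({}) }}"
  when: ansible_facts.lvm is defined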

Insufficient space when lv size matches vg

Playbook fails when the LV size is equal to the VG size. (The VG is actually slightly smaller than 20 GiB, shown as <20.00g below, so the requested 5120 extents cannot all be allocated.)

TASK [linux-system-roles.storage : Make sure LV exists] ***************************************************************************************************************************
fatal: [rhel7]: FAILED! => {"changed": false, "err": "  Volume group \"composer\" has insufficient free space (5119 extents): 5120 required.\n", "msg": "Creating logical volume 'composer' failed", "rc": 5}

[root@rhel7 ~]# vgs
  VG       #PV #LV #SN Attr   VSize   VFree  
  composer   1   0   0 wz--n- <20.00g <20.00g
  rhel       1   2   0 wz--n-  <7.00g      0 

Playbook used was:

- hosts: all
  become: yes
  become_method: sudo
  become_user: root

  vars:
    use_partitions: false
  tasks:
    - name: Configure Composer Storage
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: composer
            disks: ['vdb']
            # type: lvm
            state: present
            volumes:
              - name: composer
                size: 20G
                # type: lvm
                # fs_type: xfs
                fs_label: "composer"
                mount_point: '/var/lib/lorax/composer'
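Until the role accounts for this, a workaround is simply to request a little less than the full VG size. A hypothetical adjustment of the volume above (the exact value just needs to leave room for LVM metadata):

            volumes:
              - name: composer
                size: 19G  # slightly below the 20G VG to leave headroom for metadata
                fs_label: "composer"
                mount_point: '/var/lib/lorax/composer'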

fstype: swap not working

Is swap supported as fs_type? Per the documentation it is supported, but when I run the playbook it fails with an unknown filesystem type error.

I am using the playbook below to create a swap volume:

- hosts: localhost
  become: true

  vars:
    storage_use_partitions: true

  roles:
    - name: linux-system-roles.storage
      storage_pools:
        - name: app_data_01
          disks: ['/dev/sdb']
          #state: "absent"
          volumes:
            - name: data1
              fs_label: data1
              fs_type: ext4
              size: 10g
              mount_point: '/fs/data1'
        - name: data_02
          disks: ['/dev/sdc','/dev/sdd']
          #state: "absent"
          volumes:
            - name: data2
              size: 8g
              fs_type: swap
              mount_point: /fs/hdata2
            - name: data3
              mount_options: 'ro,noatime'
              fs_type: xfs
              size: 2g
              mount_point: /fs/data3

This is the error I get:
"msg": "Error mounting /fs/hdata2: mount: unknown filesystem type 'swap'\n"

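If the role follows the usual convention for swap, the swap volume would not be given a directory mount point at all, since swap is activated with swapon rather than mounted at a path. A hypothetical variant of the data2 entry (whether the role accepts this is exactly what this issue is asking):

            - name: data2
              size: 8g
              fs_type: swap
              # no mount_point: swap is not mounted at a directory
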
storage: tests_luks.yml partition case failed with nvme disk

This issue can be reproduced with an NVMe disk on RHEL 7/8.

playbook

# cat tests_luks.yml
---
- hosts: all
  become: true
  vars:
    storage_safe_mode: false
    mount_location: '/opt/test1'
    volume_size: '5g'

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_size }}"
        max_return: 1

    ##
    ## Partition
    ##

    - name: Create an encrypted partition volume w/ default fs
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            type: partition
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                type: partition
                mount_point: "{{ mount_location }}"
                #                size: 4g
                encryption: true
                encryption_passphrase: 'yabbadabbadoo'

    - include_tasks: verify-role-results.yml

    - name: Remove the encryption layer
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            type: partition
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                type: partition
                mount_point: "{{ mount_location }}"
                #                size: 4g
                encryption: false
                encryption_passphrase: 'yabbadabbadoo'

    - include_tasks: verify-role-results.yml

Detailed Log

# ansible-playbook -i inventory tests/tests_luks.yml -vvv

[root@storageqe-62 storage]# cat OUTPUT 
ansible-playbook 2.9.11
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 3.6.8 (default, Jun 26 2020, 12:10:09) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /root/test/storage/inventory as it did not pass its verify_file() method
script declined parsing /root/test/storage/inventory as it did not pass its verify_file() method
auto declined parsing /root/test/storage/inventory as it did not pass its verify_file() method
Parsed /root/test/storage/inventory inventory source with ini plugin

PLAYBOOK: tests_luks.yml ********************************************************************************************************************************************************************************
1 plays in tests/tests_luks.yml

PLAY [all] **********************************************************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_luks.yml:2
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551654.528671-15523-246813385857590 && echo ansible-tmp-1596551654.528671-15523-246813385857590="echo /root/.ansible/tmp/ansible-tmp-1596551654.528671-15523-246813385857590" ) && sleep 0'
Attempting python interpreter discovery
EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
EXEC /bin/sh -c '/usr/bin/python3.6 && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/setup.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmp4s2clgmr TO /root/.ansible/tmp/ansible-tmp-1596551654.528671-15523-246813385857590/AnsiballZ_setup.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551654.528671-15523-246813385857590/ /root/.ansible/tmp/ansible-tmp-1596551654.528671-15523-246813385857590/AnsiballZ_setup.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551654.528671-15523-246813385857590/AnsiballZ_setup.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551654.528671-15523-246813385857590/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers

TASK [include_role : storage] ***************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_luks.yml:10

TASK [storage : Set version specific variables] *********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:2
ok: [localhost] => {
"ansible_facts": {
"blivet_package_list": [
"python3-blivet",
"libblockdev-crypto",
"libblockdev-dm",
"libblockdev-lvm",
"libblockdev-mdraid",
"libblockdev-swap"
]
},
"ansible_included_var_files": [
"/root/test/storage/vars/RedHat_8.yml"
],
"changed": false
}

TASK [storage : define an empty list of pools to be used in testing] ************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:14
ok: [localhost] => {
"ansible_facts": {
"_storage_pools_list": []
},
"changed": false
}

TASK [storage : define an empty list of volumes to be used in testing] **********************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:18
ok: [localhost] => {
"ansible_facts": {
"_storage_volumes_list": []
},
"changed": false
}

TASK [storage : include the appropriate provider tasks] *************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:22
included: /root/test/storage/tasks/main-blivet.yml for localhost

TASK [storage : get a list of rpm packages installed on host machine] ***********************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:2
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [storage : make sure blivet is available] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:7
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551658.1332204-15631-187703718888323 && echo ansible-tmp-1596551658.1332204-15631-187703718888323="echo /root/.ansible/tmp/ansible-tmp-1596551658.1332204-15631-187703718888323" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpv7a6bt6k TO /root/.ansible/tmp/ansible-tmp-1596551658.1332204-15631-187703718888323/AnsiballZ_dnf.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551658.1332204-15631-187703718888323/ /root/.ansible/tmp/ansible-tmp-1596551658.1332204-15631-187703718888323/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551658.1332204-15631-187703718888323/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551658.1332204-15631-187703718888323/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"python3-blivet",
"libblockdev-crypto",
"libblockdev-dm",
"libblockdev-lvm",
"libblockdev-mdraid",
"libblockdev-swap"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "",
"rc": 0,
"results": [
"Installed: nss-softokn-freebl-3.44.0-15.el8.x86_64",
"Installed: nss-sysinit-3.44.0-15.el8.x86_64",
"Installed: libblockdev-2.24-1.el8.x86_64",
"Installed: nss-util-3.44.0-15.el8.x86_64",
"Installed: libblockdev-crypto-2.24-1.el8.x86_64",
"Installed: daxctl-libs-67-2.el8.x86_64",
"Installed: libblockdev-dm-2.24-1.el8.x86_64",
"Installed: libblockdev-fs-2.24-1.el8.x86_64",
"Installed: libblockdev-kbd-2.24-1.el8.x86_64",
"Installed: gdisk-1.0.3-6.el8.x86_64",
"Installed: libblockdev-loop-2.24-1.el8.x86_64",
"Installed: libblockdev-lvm-2.24-1.el8.x86_64",
"Installed: libblockdev-mdraid-2.24-1.el8.x86_64",
"Installed: libblockdev-mpath-2.24-1.el8.x86_64",
"Installed: libblockdev-nvdimm-2.24-1.el8.x86_64",
"Installed: libblockdev-part-2.24-1.el8.x86_64",
"Installed: device-mapper-multipath-0.8.4-2.el8.x86_64",
"Installed: libblockdev-swap-2.24-1.el8.x86_64",
"Installed: device-mapper-multipath-libs-0.8.4-2.el8.x86_64",
"Installed: libblockdev-utils-2.24-1.el8.x86_64",
"Installed: libbytesize-1.4-3.el8.x86_64",
"Installed: lsof-4.93.2-1.el8.x86_64",
"Installed: mdadm-4.1-14.el8.x86_64",
"Installed: userspace-rcu-0.10.1-2.el8.x86_64",
"Installed: python3-pyparted-1:3.11.0-13.el8.x86_64",
"Installed: nspr-4.21.0-2.el8_0.x86_64",
"Installed: volume_key-libs-0.3.11-5.el8.x86_64",
"Installed: ndctl-67-2.el8.x86_64",
"Installed: blivet-data-1:3.2.2-3.el8.noarch",
"Installed: nss-3.44.0-15.el8.x86_64",
"Installed: ndctl-libs-67-2.el8.x86_64",
"Installed: python3-blivet-1:3.2.2-3.el8.noarch",
"Installed: python3-blockdev-2.24-1.el8.x86_64",
"Installed: python3-bytesize-1.4-3.el8.x86_64",
"Installed: nss-softokn-3.44.0-15.el8.x86_64"
]
}

TASK [storage : initialize internal facts] **************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:18
ok: [localhost] => {
"ansible_facts": {
"_storage_pools": [],
"_storage_vol_defaults": [],
"_storage_vol_pools": [],
"_storage_vols_no_defaults": [],
"_storage_vols_no_defaults_by_pool": {},
"_storage_vols_w_defaults": [],
"_storage_volumes": []
},
"changed": false
}

TASK [storage : Apply defaults to pools and volumes [1/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:28

TASK [storage : Apply defaults to pools and volumes [2/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:36

TASK [storage : Apply defaults to pools and volumes [3/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:44

TASK [storage : Apply defaults to pools and volumes [4/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:52

TASK [storage : Apply defaults to pools and volumes [5/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:61

TASK [storage : Apply defaults to pools and volumes [6/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:72

TASK [storage : debug] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
"_storage_pools": []
}

TASK [storage : debug] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
"_storage_volumes": []
}

TASK [storage : get required packages] ******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551675.9051943-16220-178753882587611 && echo ansible-tmp-1596551675.9051943-16220-178753882587611="echo /root/.ansible/tmp/ansible-tmp-1596551675.9051943-16220-178753882587611" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpbiozo0to TO /root/.ansible/tmp/ansible-tmp-1596551675.9051943-16220-178753882587611/AnsiballZ_blivet.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551675.9051943-16220-178753882587611/ /root/.ansible/tmp/ansible-tmp-1596551675.9051943-16220-178753882587611/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551675.9051943-16220-178753882587611/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551675.9051943-16220-178753882587611/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"actions": [],
"changed": false,
"crypts": [],
"invocation": {
"module_args": {
"disklabel_type": null,
"packages_only": true,
"pools": [],
"safe_mode": true,
"use_partitions": null,
"volumes": []
}
},
"leaves": [],
"mounts": [],
"packages": [],
"pools": [],
"volumes": []
}

TASK [storage : make sure required packages are installed] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551677.1227002-16238-163363811539987 && echo ansible-tmp-1596551677.1227002-16238-163363811539987="echo /root/.ansible/tmp/ansible-tmp-1596551677.1227002-16238-163363811539987" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmph5isg7r4 TO /root/.ansible/tmp/ansible-tmp-1596551677.1227002-16238-163363811539987/AnsiballZ_dnf.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551677.1227002-16238-163363811539987/ /root/.ansible/tmp/ansible-tmp-1596551677.1227002-16238-163363811539987/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551677.1227002-16238-163363811539987/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551677.1227002-16238-163363811539987/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Nothing to do",
"rc": 0,
"results": []
}

TASK [storage : manage the pools and volumes to match the specified state] ******************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551680.7664928-16254-120747266880603 && echo ansible-tmp-1596551680.7664928-16254-120747266880603="echo /root/.ansible/tmp/ansible-tmp-1596551680.7664928-16254-120747266880603" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpsx6rcxev TO /root/.ansible/tmp/ansible-tmp-1596551680.7664928-16254-120747266880603/AnsiballZ_blivet.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551680.7664928-16254-120747266880603/ /root/.ansible/tmp/ansible-tmp-1596551680.7664928-16254-120747266880603/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551680.7664928-16254-120747266880603/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551680.7664928-16254-120747266880603/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"actions": [],
"changed": false,
"crypts": [],
"invocation": {
"module_args": {
"disklabel_type": null,
"packages_only": false,
"pools": [],
"safe_mode": false,
"use_partitions": null,
"volumes": []
}
},
"leaves": [],
"mounts": [],
"packages": [],
"pools": [],
"volumes": []
}

TASK [storage : debug] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:113
ok: [localhost] => {
"blivet_output": {
"actions": [],
"changed": false,
"crypts": [],
"failed": false,
"leaves": [],
"mounts": [],
"packages": [],
"pools": [],
"volumes": []
}
}

TASK [storage : set the list of pools for test verification] ********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:116
ok: [localhost] => {
"ansible_facts": {
"_storage_pools_list": []
},
"changed": false
}

TASK [storage : set the list of volumes for test verification] ******************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:120
ok: [localhost] => {
"ansible_facts": {
"_storage_volumes_list": []
},
"changed": false
}

TASK [storage : remove obsolete mounts] *****************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:136

TASK [storage : tell systemd to refresh its view of /etc/fstab] *****************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:147
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [storage : set up new/current mounts] **************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:152

TASK [storage : tell systemd to refresh its view of /etc/fstab] *****************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:163
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [storage : Manage /etc/crypttab to account for changes we just made] *******************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:171

TASK [storage : Update facts] ***************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:186
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551682.4189703-16288-269707015022603 && echo ansible-tmp-1596551682.4189703-16288-269707015022603="echo /root/.ansible/tmp/ansible-tmp-1596551682.4189703-16288-269707015022603" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/setup.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmppvp3cca_ TO /root/.ansible/tmp/ansible-tmp-1596551682.4189703-16288-269707015022603/AnsiballZ_setup.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551682.4189703-16288-269707015022603/ /root/.ansible/tmp/ansible-tmp-1596551682.4189703-16288-269707015022603/AnsiballZ_setup.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551682.4189703-16288-269707015022603/AnsiballZ_setup.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551682.4189703-16288-269707015022603/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
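
The run up to this point exercised the role with empty pools and volumes lists (note "pools": [] and "volumes": [] in the blivet module_args above), so it only installed the blivet package prerequisites and changed nothing on disk. A bare invocation of the role along these lines would reproduce such a pass; this is a sketch only, and the way tests_luks.yml actually includes the role (task layout, role name/path) is an assumption, not a copy of the test file.

```yaml
# Minimal sketch of a no-op role application (nothing to manage).
# The role name "storage" is assumed to resolve to the local checkout
# used in this test run.
- hosts: all
  become: true
  tasks:
    - name: Apply the storage role with no pools or volumes defined
      include_role:
        name: storage
```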

TASK [include_tasks] ************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_luks.yml:13
included: /root/test/storage/tests/get_unused_disk.yml for localhost

TASK [Find unused disks in the system] ******************************************************************************************************************************************************************
task path: /root/test/storage/tests/get_unused_disk.yml:2
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551684.3667912-16397-128580638255759 && echo ansible-tmp-1596551684.3667912-16397-128580638255759="echo /root/.ansible/tmp/ansible-tmp-1596551684.3667912-16397-128580638255759" ) && sleep 0'
Using module file /root/test/storage/library/find_unused_disk.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpmqn72f82 TO /root/.ansible/tmp/ansible-tmp-1596551684.3667912-16397-128580638255759/AnsiballZ_find_unused_disk.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551684.3667912-16397-128580638255759/ /root/.ansible/tmp/ansible-tmp-1596551684.3667912-16397-128580638255759/AnsiballZ_find_unused_disk.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551684.3667912-16397-128580638255759/AnsiballZ_find_unused_disk.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551684.3667912-16397-128580638255759/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"disks": [
"nvme0n1"
],
"invocation": {
"module_args": {
"max_return": 1,
"min_size": "5g"
}
}
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/get_unused_disk.yml:8
ok: [localhost] => {
"ansible_facts": {
"unused_disks": [
"nvme0n1"
]
},
"changed": false
}

TASK [Exit playbook when there's not enough unused disks in the system] *********************************************************************************************************************************
task path: /root/test/storage/tests/get_unused_disk.yml:12
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Print unused disks] *******************************************************************************************************************************************************************************
task path: /root/test/storage/tests/get_unused_disk.yml:17
ok: [localhost] => {
"unused_disks": [
"nvme0n1"
]
}
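
The next task ("Create an encrypted partition volume w/ default fs", tests_luks.yml:22) applies the role to the unused disk found above. Based on the pool and volume parameters echoed later in this log (pool "foo" of type partition on nvme0n1, volume "test1" mounted at /opt/test1 with encryption enabled), the invocation is roughly equivalent to the sketch below. The exact wording of tests_luks.yml is an assumption; also note that this 2020-era log uses encryption_passphrase, while later releases of the role use encryption_password for this setting.

```yaml
# Hedged reconstruction of the role invocation behind the next task.
# All values are copied from the module_args printed later in this log;
# passing the unused_disks fact as the disks list is an assumption.
- name: Create an encrypted partition volume w/ default fs
  include_role:
    name: storage
  vars:
    storage_pools:
      - name: foo
        type: partition
        disks: "{{ unused_disks }}"
        volumes:
          - name: test1
            type: partition
            mount_point: /opt/test1
            encryption: true
            encryption_passphrase: yabbadabbadoo
```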

TASK [Create an encrypted partition volume w/ default fs] ***********************************************************************************************************************************************
task path: /root/test/storage/tests/tests_luks.yml:22

TASK [storage : Set version specific variables] *********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:2
ok: [localhost] => {
"ansible_facts": {
"blivet_package_list": [
"python3-blivet",
"libblockdev-crypto",
"libblockdev-dm",
"libblockdev-lvm",
"libblockdev-mdraid",
"libblockdev-swap"
]
},
"ansible_included_var_files": [
"/root/test/storage/vars/RedHat_8.yml"
],
"changed": false
}

TASK [storage : define an empty list of pools to be used in testing] ************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:14
ok: [localhost] => {
"ansible_facts": {
"_storage_pools_list": []
},
"changed": false
}

TASK [storage : define an empty list of volumes to be used in testing] **********************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:18
ok: [localhost] => {
"ansible_facts": {
"_storage_volumes_list": []
},
"changed": false
}

TASK [storage : include the appropriate provider tasks] *************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:22
included: /root/test/storage/tasks/main-blivet.yml for localhost

TASK [storage : get a list of rpm packages installed on host machine] ***********************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:2
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [storage : make sure blivet is available] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:7
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551686.3083057-16431-81065081504968 && echo ansible-tmp-1596551686.3083057-16431-81065081504968="echo /root/.ansible/tmp/ansible-tmp-1596551686.3083057-16431-81065081504968" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpcs3kyb78 TO /root/.ansible/tmp/ansible-tmp-1596551686.3083057-16431-81065081504968/AnsiballZ_dnf.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551686.3083057-16431-81065081504968/ /root/.ansible/tmp/ansible-tmp-1596551686.3083057-16431-81065081504968/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551686.3083057-16431-81065081504968/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551686.3083057-16431-81065081504968/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"python3-blivet",
"libblockdev-crypto",
"libblockdev-dm",
"libblockdev-lvm",
"libblockdev-mdraid",
"libblockdev-swap"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Nothing to do",
"rc": 0,
"results": []
}

TASK [storage : initialize internal facts] **************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:18
ok: [localhost] => {
"ansible_facts": {
"_storage_pools": [],
"_storage_vol_defaults": [],
"_storage_vol_pools": [],
"_storage_vols_no_defaults": [],
"_storage_vols_no_defaults_by_pool": {},
"_storage_vols_w_defaults": [],
"_storage_volumes": []
},
"changed": false
}

TASK [storage : Apply defaults to pools and volumes [1/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:28
ok: [localhost] => (item={'name': 'foo', 'type': 'partition', 'disks': ['nvme0n1'], 'volumes': [{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': True, 'encryption_passphrase': 'yabbadabbadoo'}]}) => {
"ansible_facts": {
"_storage_pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": true,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}
]
},
"ansible_loop_var": "pool",
"changed": false,
"pool": {
"disks": [
"nvme0n1"
],
"name": "foo",
"type": "partition",
"volumes": [
{
"encryption": true,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}
}

TASK [storage : Apply defaults to pools and volumes [2/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:36
ok: [localhost] => (item=[{'state': 'present', 'type': 'partition', 'encryption': False, 'encryption_passphrase': None, 'encryption_key_file': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'name': 'foo', 'disks': ['nvme0n1'], 'volumes': [{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': True, 'encryption_passphrase': 'yabbadabbadoo'}]}, {'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': True, 'encryption_passphrase': 'yabbadabbadoo'}]) => {
"ansible_facts": {
"_storage_vol_defaults": [
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "lvm"
}
],
"_storage_vol_pools": [
"foo"
],
"_storage_vols_no_defaults": [
{
"encryption": true,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
},
"ansible_loop_var": "item",
"changed": false,
"item": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": true,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
},
{
"encryption": true,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}

TASK [storage : Apply defaults to pools and volumes [3/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:44
ok: [localhost] => (item=[{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': True, 'encryption_passphrase': 'yabbadabbadoo'}, {'state': 'present', 'type': 'lvm', 'size': 0, 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_passphrase': None, 'encryption_key_file': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None}]) => {
"ansible_facts": {
"_storage_vols_w_defaults": [
{
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
},
"ansible_index_var": "idx",
"ansible_loop_var": "item",
"changed": false,
"idx": 0,
"item": [
{
"encryption": true,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
},
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "lvm"
}
]
}

TASK [storage : Apply defaults to pools and volumes [4/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:52
ok: [localhost] => (item={'state': 'present', 'type': 'partition', 'encryption': False, 'encryption_passphrase': None, 'encryption_key_file': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'name': 'foo', 'disks': ['nvme0n1'], 'volumes': [{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': True, 'encryption_passphrase': 'yabbadabbadoo'}]}) => {
"ansible_facts": {
"_storage_vols_no_defaults_by_pool": {
"foo": [
{
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
},
"ansible_loop_var": "item",
"changed": false,
"item": {
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": true,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}
}

TASK [storage : Apply defaults to pools and volumes [5/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:61
ok: [localhost] => (item={'state': 'present', 'type': 'partition', 'encryption': False, 'encryption_passphrase': None, 'encryption_key_file': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'name': 'foo', 'disks': ['nvme0n1'], 'volumes': [{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': True, 'encryption_passphrase': 'yabbadabbadoo'}]}) => {
"ansible_facts": {
"_storage_pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
]
},
"ansible_index_var": "idx",
"ansible_loop_var": "pool",
"changed": false,
"idx": 0,
"pool": {
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": true,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}
}

TASK [storage : Apply defaults to pools and volumes [6/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:72

TASK [storage : debug] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
"_storage_pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
]
}

TASK [storage : debug] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
"_storage_volumes": []
}

TASK [storage : get required packages] ******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551691.1982038-16465-51285333209511 && echo ansible-tmp-1596551691.1982038-16465-51285333209511="echo /root/.ansible/tmp/ansible-tmp-1596551691.1982038-16465-51285333209511" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmps14vqdi_ TO /root/.ansible/tmp/ansible-tmp-1596551691.1982038-16465-51285333209511/AnsiballZ_blivet.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551691.1982038-16465-51285333209511/ /root/.ansible/tmp/ansible-tmp-1596551691.1982038-16465-51285333209511/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551691.1982038-16465-51285333209511/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551691.1982038-16465-51285333209511/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"actions": [],
"changed": false,
"crypts": [],
"invocation": {
"module_args": {
"disklabel_type": null,
"packages_only": true,
"pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
],
"safe_mode": true,
"use_partitions": null,
"volumes": []
}
},
"leaves": [],
"mounts": [],
"packages": [
"cryptsetup",
"xfsprogs"
],
"pools": [],
"volumes": []
}

TASK [storage : make sure required packages are installed] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551694.9371064-16528-8555688808085 && echo ansible-tmp-1596551694.9371064-16528-8555688808085="echo /root/.ansible/tmp/ansible-tmp-1596551694.9371064-16528-8555688808085" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpajtghxml TO /root/.ansible/tmp/ansible-tmp-1596551694.9371064-16528-8555688808085/AnsiballZ_dnf.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551694.9371064-16528-8555688808085/ /root/.ansible/tmp/ansible-tmp-1596551694.9371064-16528-8555688808085/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551694.9371064-16528-8555688808085/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551694.9371064-16528-8555688808085/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"cryptsetup",
"xfsprogs"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "",
"rc": 0,
"results": [
"Installed: cryptsetup-2.3.3-1.el8.x86_64"
]
}

TASK [storage : manage the pools and volumes to match the specified state] ******************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551700.3955412-16581-219911815072388 && echo ansible-tmp-1596551700.3955412-16581-219911815072388="echo /root/.ansible/tmp/ansible-tmp-1596551700.3955412-16581-219911815072388" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpoq5ibqe2 TO /root/.ansible/tmp/ansible-tmp-1596551700.3955412-16581-219911815072388/AnsiballZ_blivet.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551700.3955412-16581-219911815072388/ /root/.ansible/tmp/ansible-tmp-1596551700.3955412-16581-219911815072388/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551700.3955412-16581-219911815072388/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551700.3955412-16581-219911815072388/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"actions": [
{
"action": "create format",
"device": "/dev/nvme0n1",
"fs_type": "disklabel"
},
{
"action": "create device",
"device": "/dev/nvme0n1p1",
"fs_type": null
},
{
"action": "create format",
"device": "/dev/nvme0n1p1",
"fs_type": "luks"
},
{
"action": "create device",
"device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"fs_type": null
},
{
"action": "create format",
"device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"fs_type": "xfs"
}
],
"changed": true,
"crypts": [
{
"backing_device": "/dev/nvme0n1p1",
"name": "luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"password": "-",
"state": "present"
}
],
"invocation": {
"module_args": {
"disklabel_type": null,
"packages_only": false,
"pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"_device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_kernel_device": "/dev/dm-3",
"_mount_id": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_raw_device": "/dev/nvme0n1p1",
"_raw_kernel_device": "/dev/nvme0n1p1",
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
],
"safe_mode": false,
"use_partitions": null,
"volumes": []
}
},
"leaves": [
"/dev/sda1",
"/dev/sda2",
"/dev/mapper/rhel_storageqe--62-home",
"/dev/mapper/rhel_storageqe--62-root",
"/dev/mapper/rhel_storageqe--62-swap",
"/dev/sdb",
"/dev/sdh",
"/dev/sdi",
"/dev/sdj",
"/dev/sdc",
"/dev/sdk",
"/dev/sdl1",
"/dev/sdd",
"/dev/sde",
"/dev/sdf",
"/dev/sdg",
"/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5"
],
"mounts": [
{
"dump": 0,
"fstype": "xfs",
"opts": "defaults",
"passno": 0,
"path": "/opt/test1",
"src": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"state": "mounted"
}
],
"packages": [
"e2fsprogs",
"lvm2",
"xfsprogs",
"cryptsetup",
"dosfstools"
],
"pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"_device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_kernel_device": "/dev/dm-3",
"_mount_id": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_raw_device": "/dev/nvme0n1p1",
"_raw_kernel_device": "/dev/nvme0n1p1",
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
],
"volumes": []
}

TASK [storage : debug] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:113
ok: [localhost] => {
"blivet_output": {
"actions": [
{
"action": "create format",
"device": "/dev/nvme0n1",
"fs_type": "disklabel"
},
{
"action": "create device",
"device": "/dev/nvme0n1p1",
"fs_type": null
},
{
"action": "create format",
"device": "/dev/nvme0n1p1",
"fs_type": "luks"
},
{
"action": "create device",
"device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"fs_type": null
},
{
"action": "create format",
"device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"fs_type": "xfs"
}
],
"changed": true,
"crypts": [
{
"backing_device": "/dev/nvme0n1p1",
"name": "luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"password": "-",
"state": "present"
}
],
"failed": false,
"leaves": [
"/dev/sda1",
"/dev/sda2",
"/dev/mapper/rhel_storageqe--62-home",
"/dev/mapper/rhel_storageqe--62-root",
"/dev/mapper/rhel_storageqe--62-swap",
"/dev/sdb",
"/dev/sdh",
"/dev/sdi",
"/dev/sdj",
"/dev/sdc",
"/dev/sdk",
"/dev/sdl1",
"/dev/sdd",
"/dev/sde",
"/dev/sdf",
"/dev/sdg",
"/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5"
],
"mounts": [
{
"dump": 0,
"fstype": "xfs",
"opts": "defaults",
"passno": 0,
"path": "/opt/test1",
"src": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"state": "mounted"
}
],
"packages": [
"e2fsprogs",
"lvm2",
"xfsprogs",
"cryptsetup",
"dosfstools"
],
"pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"_device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_kernel_device": "/dev/dm-3",
"_mount_id": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_raw_device": "/dev/nvme0n1p1",
"_raw_kernel_device": "/dev/nvme0n1p1",
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
],
"volumes": []
}
}

TASK [storage : set the list of pools for test verification] ********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:116
ok: [localhost] => {
"ansible_facts": {
"_storage_pools_list": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"_device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_kernel_device": "/dev/dm-3",
"_mount_id": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_raw_device": "/dev/nvme0n1p1",
"_raw_kernel_device": "/dev/nvme0n1p1",
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
]
},
"changed": false
}

TASK [storage : set the list of volumes for test verification] ******************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:120
ok: [localhost] => {
"ansible_facts": {
"_storage_volumes_list": []
},
"changed": false
}

TASK [storage : remove obsolete mounts] *****************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:136

TASK [storage : tell systemd to refresh its view of /etc/fstab] *****************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:147
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551718.4167783-17619-4844520969938 && echo ansible-tmp-1596551718.4167783-17619-4844520969938="echo /root/.ansible/tmp/ansible-tmp-1596551718.4167783-17619-4844520969938" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/systemd.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpcbripnpt TO /root/.ansible/tmp/ansible-tmp-1596551718.4167783-17619-4844520969938/AnsiballZ_systemd.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551718.4167783-17619-4844520969938/ /root/.ansible/tmp/ansible-tmp-1596551718.4167783-17619-4844520969938/AnsiballZ_systemd.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551718.4167783-17619-4844520969938/AnsiballZ_systemd.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551718.4167783-17619-4844520969938/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
"daemon_reexec": false,
"daemon_reload": true,
"enabled": null,
"force": null,
"masked": null,
"name": null,
"no_block": false,
"scope": null,
"state": null,
"user": null
}
},
"name": null,
"status": {}
}

TASK [storage : set up new/current mounts] **************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:152
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551720.0600996-17655-275030844737805 && echo ansible-tmp-1596551720.0600996-17655-275030844737805="echo /root/.ansible/tmp/ansible-tmp-1596551720.0600996-17655-275030844737805" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/mount.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpc2j9_s_a TO /root/.ansible/tmp/ansible-tmp-1596551720.0600996-17655-275030844737805/AnsiballZ_mount.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551720.0600996-17655-275030844737805/ /root/.ansible/tmp/ansible-tmp-1596551720.0600996-17655-275030844737805/AnsiballZ_mount.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551720.0600996-17655-275030844737805/AnsiballZ_mount.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551720.0600996-17655-275030844737805/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => (item={'src': '/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5', 'path': '/opt/test1', 'fstype': 'xfs', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted'}) => {
"ansible_loop_var": "mount_info",
"changed": true,
"dump": "0",
"fstab": "/etc/fstab",
"fstype": "xfs",
"invocation": {
"module_args": {
"backup": false,
"boot": true,
"dump": null,
"fstab": null,
"fstype": "xfs",
"opts": "defaults",
"passno": null,
"path": "/opt/test1",
"src": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"state": "mounted"
}
},
"mount_info": {
"dump": 0,
"fstype": "xfs",
"opts": "defaults",
"passno": 0,
"path": "/opt/test1",
"src": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"state": "mounted"
},
"name": "/opt/test1",
"opts": "defaults",
"passno": "0",
"src": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5"
}
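
Here the role feeds the mounts list returned by the blivet module into Ansible's mount module, which adds the entry to /etc/fstab and mounts it, roughly "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 /opt/test1 xfs defaults 0 0". A standalone task with the same effect would look something like the sketch below (values copied from the module_args above; this is not the literal task from tasks/main-blivet.yml, which loops over blivet_output.mounts).

```yaml
# Hypothetical standalone equivalent of the mount entry created above.
- name: Mount the new LUKS-backed XFS filesystem
  mount:
    src: /dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5
    path: /opt/test1
    fstype: xfs
    opts: defaults
    state: mounted
```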

TASK [storage : tell systemd to refresh its view of /etc/fstab] *****************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:163
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551720.95644-17680-240149803781325 && echo ansible-tmp-1596551720.95644-17680-240149803781325="echo /root/.ansible/tmp/ansible-tmp-1596551720.95644-17680-240149803781325" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/systemd.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmp58e0xbbj TO /root/.ansible/tmp/ansible-tmp-1596551720.95644-17680-240149803781325/AnsiballZ_systemd.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551720.95644-17680-240149803781325/ /root/.ansible/tmp/ansible-tmp-1596551720.95644-17680-240149803781325/AnsiballZ_systemd.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551720.95644-17680-240149803781325/AnsiballZ_systemd.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551720.95644-17680-240149803781325/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
"daemon_reexec": false,
"daemon_reload": true,
"enabled": null,
"force": null,
"masked": null,
"name": null,
"no_block": false,
"scope": null,
"state": null,
"user": null
}
},
"name": null,
"status": {}
}

TASK [storage : Manage /etc/crypttab to account for changes we just made] *******************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:171
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551721.9902716-17717-260049588039911 && echo ansible-tmp-1596551721.9902716-17717-260049588039911="echo /root/.ansible/tmp/ansible-tmp-1596551721.9902716-17717-260049588039911" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/crypttab.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmph6ki_zxv TO /root/.ansible/tmp/ansible-tmp-1596551721.9902716-17717-260049588039911/AnsiballZ_crypttab.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551721.9902716-17717-260049588039911/ /root/.ansible/tmp/ansible-tmp-1596551721.9902716-17717-260049588039911/AnsiballZ_crypttab.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551721.9902716-17717-260049588039911/AnsiballZ_crypttab.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551721.9902716-17717-260049588039911/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => (item={'backing_device': '/dev/nvme0n1p1', 'name': 'luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5', 'password': '-', 'state': 'present'}) => {
"ansible_loop_var": "entry",
"backing_device": "/dev/nvme0n1p1",
"changed": true,
"entry": {
"backing_device": "/dev/nvme0n1p1",
"name": "luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"password": "-",
"state": "present"
},
"gid": 0,
"group": "root",
"invocation": {
"module_args": {
"backing_device": "/dev/nvme0n1p1",
"name": "luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"opts": null,
"password": "-",
"path": "/etc/crypttab",
"state": "present"
}
},
"mode": "0600",
"msg": "added line",
"name": "luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"opts": null,
"owner": "root",
"password": "-",
"path": "/etc/crypttab",
"secontext": "system_u:object_r:etc_t:s0",
"size": 59,
"state": "file",
"uid": 0,
"warnings": [
"Module did not set no_log for password"
]
}
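
Because the new volume is encrypted, the role also records the LUKS mapping in /etc/crypttab using the crypttab module, so the device can be set up by name at boot. A standalone task equivalent to the change above would look roughly like the sketch below (values copied from the module_args; the real task in tasks/main-blivet.yml loops over blivet_output.crypts, and the "-" password field means no keyfile is stored in crypttab).

```yaml
# Hypothetical standalone equivalent of the crypttab entry added above.
- name: Record the LUKS mapping in /etc/crypttab
  crypttab:
    name: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5
    backing_device: /dev/nvme0n1p1
    password: "-"
    state: present
```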

TASK [storage : Update facts] ***************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:186
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551722.8393655-17732-211856504207708 && echo ansible-tmp-1596551722.8393655-17732-211856504207708="echo /root/.ansible/tmp/ansible-tmp-1596551722.8393655-17732-211856504207708" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/setup.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmp2yhupv6h TO /root/.ansible/tmp/ansible-tmp-1596551722.8393655-17732-211856504207708/AnsiballZ_setup.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551722.8393655-17732-211856504207708/ /root/.ansible/tmp/ansible-tmp-1596551722.8393655-17732-211856504207708/AnsiballZ_setup.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551722.8393655-17732-211856504207708/AnsiballZ_setup.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551722.8393655-17732-211856504207708/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]

TASK [include_tasks] ************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_luks.yml:38
included: /root/test/storage/tests/verify-role-results.yml for localhost

TASK [Print out pool information] ***********************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:1
ok: [localhost] => {
"_storage_pools_list": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"_device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_kernel_device": "/dev/dm-3",
"_mount_id": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"_raw_device": "/dev/nvme0n1p1",
"_raw_kernel_device": "/dev/nvme0n1p1",
"encryption": true,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
]
}
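
The pool state printed above corresponds to a storage_pools definition along these lines (reconstructed from the values in the output; the disk name and passphrase are specific to this test run):

storage_pools:
  - name: foo
    type: partition
    disks:
      - nvme0n1
    volumes:
      - name: test1
        type: partition
        mount_point: /opt/test1
        encryption: true
        encryption_passphrase: yabbadabbadoo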

TASK [Print out volume information] *********************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:6
skipping: [localhost] => {}

TASK [Collect info about the volumes.] ******************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:14
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551725.3757377-17847-222674636170443 && echo ansible-tmp-1596551725.3757377-17847-222674636170443="echo /root/.ansible/tmp/ansible-tmp-1596551725.3757377-17847-222674636170443" ) && sleep 0'
Using module file /root/test/storage/library/blockdev_info.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpyqchlbr_ TO /root/.ansible/tmp/ansible-tmp-1596551725.3757377-17847-222674636170443/AnsiballZ_blockdev_info.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551725.3757377-17847-222674636170443/ /root/.ansible/tmp/ansible-tmp-1596551725.3757377-17847-222674636170443/AnsiballZ_blockdev_info.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551725.3757377-17847-222674636170443/AnsiballZ_blockdev_info.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551725.3757377-17847-222674636170443/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"info": {
"/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5": {
"fstype": "xfs",
"label": "",
"name": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"size": "745.2G",
"type": "crypt",
"uuid": "f1698797-4201-47e1-ac23-985ab6927e03"
},
"/dev/mapper/rhel_storageqe--62-home": {
"fstype": "xfs",
"label": "",
"name": "/dev/mapper/rhel_storageqe--62-home",
"size": "200G",
"type": "lvm",
"uuid": "25082e54-84e7-4945-87a4-532894d69113"
},
"/dev/mapper/rhel_storageqe--62-root": {
"fstype": "xfs",
"label": "",
"name": "/dev/mapper/rhel_storageqe--62-root",
"size": "70G",
"type": "lvm",
"uuid": "2eeea9bb-806d-46d1-a309-9806a6d92074"
},
"/dev/mapper/rhel_storageqe--62-swap": {
"fstype": "swap",
"label": "",
"name": "/dev/mapper/rhel_storageqe--62-swap",
"size": "7.9G",
"type": "lvm",
"uuid": "aeaa2293-343b-4399-afa5-7d2ceafac06e"
},
"/dev/nvme0n1": {
"fstype": "",
"label": "",
"name": "/dev/nvme0n1",
"size": "745.2G",
"type": "disk",
"uuid": ""
},
"/dev/nvme0n1p1": {
"fstype": "crypto_LUKS",
"label": "",
"name": "/dev/nvme0n1p1",
"size": "745.2G",
"type": "partition",
"uuid": "d1731709-dfb2-4096-a9c0-6e332d6e95e5"
},
"/dev/sda": {
"fstype": "",
"label": "",
"name": "/dev/sda",
"size": "279.4G",
"type": "disk",
"uuid": ""
},
"/dev/sda1": {
"fstype": "vfat",
"label": "",
"name": "/dev/sda1",
"size": "600M",
"type": "partition",
"uuid": "E3F6-B0B3"
},
"/dev/sda2": {
"fstype": "xfs",
"label": "",
"name": "/dev/sda2",
"size": "1G",
"type": "partition",
"uuid": "02369863-9365-4c2c-a2c4-141b221fdf33"
},
"/dev/sda3": {
"fstype": "LVM2_member",
"label": "",
"name": "/dev/sda3",
"size": "277.8G",
"type": "partition",
"uuid": "XUQoSV-45yt-VtMv-SmVa-5iAe-NPRB-bkvLmD"
},
"/dev/sdb": {
"fstype": "xfs",
"label": "",
"name": "/dev/sdb",
"size": "279.4G",
"type": "disk",
"uuid": "22fd63bb-7a6a-4abd-ae93-03a803699d32"
},
"/dev/sdc": {
"fstype": "ext3",
"label": "",
"name": "/dev/sdc",
"size": "186.3G",
"type": "disk",
"uuid": "698fd066-11fb-49ee-bbd6-c196ac5776c4"
},
"/dev/sdd": {
"fstype": "ext3",
"label": "",
"name": "/dev/sdd",
"size": "111.8G",
"type": "disk",
"uuid": "ebb1ec3f-28cd-4df4-b73e-f8125892fa13"
},
"/dev/sde": {
"fstype": "xfs",
"label": "",
"name": "/dev/sde",
"size": "111.8G",
"type": "disk",
"uuid": "fb09fac2-02c3-4dc9-8fcd-9336b18a8f53"
},
"/dev/sdf": {
"fstype": "xfs",
"label": "",
"name": "/dev/sdf",
"size": "931.5G",
"type": "disk",
"uuid": "50c0a829-be65-4886-8f4d-7f750dbceea4"
},
"/dev/sdg": {
"fstype": "xfs",
"label": "",
"name": "/dev/sdg",
"size": "931.5G",
"type": "disk",
"uuid": "0028b32c-0f80-43e4-8de3-a6eb0487e43d"
},
"/dev/sdh": {
"fstype": "xfs",
"label": "",
"name": "/dev/sdh",
"size": "931.5G",
"type": "disk",
"uuid": "bc0d9a6e-58b2-4a88-8257-608835b5160c"
},
"/dev/sdi": {
"fstype": "xfs",
"label": "",
"name": "/dev/sdi",
"size": "931.5G",
"type": "disk",
"uuid": "f37a9ad7-bc49-4626-864e-a1831bb46d70"
},
"/dev/sdj": {
"fstype": "xfs",
"label": "",
"name": "/dev/sdj",
"size": "931.5G",
"type": "disk",
"uuid": "9e25c6d2-37ea-42bf-ade3-8a63622c7172"
},
"/dev/sdk": {
"fstype": "xfs",
"label": "",
"name": "/dev/sdk",
"size": "279.4G",
"type": "disk",
"uuid": "00fac5a6-cf60-4cb4-95a5-e2f0c0cad49f"
},
"/dev/sdl": {
"fstype": "",
"label": "",
"name": "/dev/sdl",
"size": "279.4G",
"type": "disk",
"uuid": ""
},
"/dev/sdl1": {
"fstype": "xfs",
"label": "",
"name": "/dev/sdl1",
"size": "279.4G",
"type": "partition",
"uuid": "8bd8c098-3eea-47f1-8551-2a2d5afd3de4"
}
},
"invocation": {
"module_args": {}
}
}
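
The block-device snapshot above is produced by the role's bundled blockdev_info module, which takes no arguments. A minimal sketch of how a test can collect and register it (the register name is taken from the variable cleanup later in this log):

- name: Collect info about the volumes
  blockdev_info:
  register: storage_test_blkinfo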

TASK [Read the /etc/fstab file for volume existence] ****************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:19
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551726.173106-17863-268434075682075 && echo ansible-tmp-1596551726.173106-17863-268434075682075="echo /root/.ansible/tmp/ansible-tmp-1596551726.173106-17863-268434075682075" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpzfc2liue TO /root/.ansible/tmp/ansible-tmp-1596551726.173106-17863-268434075682075/AnsiballZ_command.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551726.173106-17863-268434075682075/ /root/.ansible/tmp/ansible-tmp-1596551726.173106-17863-268434075682075/AnsiballZ_command.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551726.173106-17863-268434075682075/AnsiballZ_command.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551726.173106-17863-268434075682075/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"cat",
"/etc/fstab"
],
"delta": "0:00:00.003273",
"end": "2020-08-04 10:35:26.837134",
"invocation": {
"module_args": {
"_raw_params": "cat /etc/fstab",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2020-08-04 10:35:26.833861",
"stderr": "",
"stderr_lines": [],
"stdout": "\n#\n# /etc/fstab\n# Created by anaconda on Tue Aug 4 14:18:46 2020\n#\n# Accessible filesystems, by reference, are maintained under '/dev/disk/'.\n# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.\n#\n# After editing this file, run 'systemctl daemon-reload' to update systemd\n# units generated from this file.\n#\n/dev/mapper/rhel_storageqe--62-root / xfs defaults 0 0\nUUID=02369863-9365-4c2c-a2c4-141b221fdf33 /boot xfs defaults 0 0\nUUID=E3F6-B0B3 /boot/efi vfat umask=0077,shortname=winnt 0 2\n/dev/mapper/rhel_storageqe--62-home /home xfs defaults 0 0\n/dev/mapper/rhel_storageqe--62-swap none swap defaults 0 0\n/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 /opt/test1 xfs defaults 0 0",
"stdout_lines": [
"",
"#",
"# /etc/fstab",
"# Created by anaconda on Tue Aug 4 14:18:46 2020",
"#",
"# Accessible filesystems, by reference, are maintained under '/dev/disk/'.",
"# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.",
"#",
"# After editing this file, run 'systemctl daemon-reload' to update systemd",
"# units generated from this file.",
"#",
"/dev/mapper/rhel_storageqe--62-root / xfs defaults 0 0",
"UUID=02369863-9365-4c2c-a2c4-141b221fdf33 /boot xfs defaults 0 0",
"UUID=E3F6-B0B3 /boot/efi vfat umask=0077,shortname=winnt 0 2",
"/dev/mapper/rhel_storageqe--62-home /home xfs defaults 0 0",
"/dev/mapper/rhel_storageqe--62-swap none swap defaults 0 0",
"/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 /opt/test1 xfs defaults 0 0"
]
}

TASK [Read the /etc/crypttab file] **********************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:24
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551726.96098-17879-164953658137720 && echo ansible-tmp-1596551726.96098-17879-164953658137720="echo /root/.ansible/tmp/ansible-tmp-1596551726.96098-17879-164953658137720" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpkwj9hgh_ TO /root/.ansible/tmp/ansible-tmp-1596551726.96098-17879-164953658137720/AnsiballZ_command.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551726.96098-17879-164953658137720/ /root/.ansible/tmp/ansible-tmp-1596551726.96098-17879-164953658137720/AnsiballZ_command.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551726.96098-17879-164953658137720/AnsiballZ_command.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551726.96098-17879-164953658137720/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"cat",
"/etc/crypttab"
],
"delta": "0:00:00.003386",
"end": "2020-08-04 10:35:27.250945",
"invocation": {
"module_args": {
"_raw_params": "cat /etc/crypttab",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2020-08-04 10:35:27.247559",
"stderr": "",
"stderr_lines": [],
"stdout": "luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 /dev/nvme0n1p1 -",
"stdout_lines": [
"luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 /dev/nvme0n1p1 -"
]
}
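
The single crypttab line above follows the "name backing-device key-file" layout, with "-" meaning no key file. A minimal sketch of the kind of assertion the verification tasks run against it (variable names are taken from the facts set later in this log, not copied from the test files):

- name: Check the crypttab entry for the volume
  assert:
    that:
      - storage_test_crypttab.stdout_lines | length == 1
      - storage_test_crypttab.stdout_lines[0].split()[1] == '/dev/nvme0n1p1'
      - storage_test_crypttab.stdout_lines[0].split()[2] == '-'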

TASK [Verify the volumes listed in storage_pools were correctly managed] ********************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:32
included: /root/test/storage/tests/test-verify-pool.yml for localhost

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool.yml:5
ok: [localhost] => {
"ansible_facts": {
"_storage_pool_tests": [
"members",
"md",
"volumes"
]
},
"changed": false
}

TASK [include_tasks] ************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool.yml:17
included: /root/test/storage/tests/test-verify-pool-members.yml for localhost
included: /root/test/storage/tests/test-verify-pool-md.yml for localhost
included: /root/test/storage/tests/test-verify-pool-volumes.yml for localhost

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:1
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Get the canonical device path for each member device] *********************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:7
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:16
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Verify PV count] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:23
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:29
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:33
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:37
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Check the type of each PV] ************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:41
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Check member encryption] **************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:50
included: /root/test/storage/tests/verify-pool-members-encryption.yml for localhost

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-pool-members-encryption.yml:4
ok: [localhost] => {
"ansible_facts": {
"_storage_test_expected_crypttab_entries": "0",
"_storage_test_expected_crypttab_key_file": "-"
},
"changed": false
}

TASK [Validate pool member LUKS settings] ***************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-pool-members-encryption.yml:8
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Validate pool member crypttab entries] ************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-pool-members-encryption.yml:15
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-pool-members-encryption.yml:22
ok: [localhost] => {
"ansible_facts": {
"_storage_test_crypttab_entries": null,
"_storage_test_crypttab_key_file": null
},
"changed": false
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-members.yml:53
ok: [localhost] => {
"ansible_facts": {
"_storage_test_expected_pv_count": null,
"_storage_test_expected_pv_type": null,
"_storage_test_pool_pvs": [],
"_storage_test_pool_pvs_lvm": []
},
"changed": false
}

TASK [get information about RAID] ***********************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-md.yml:7
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-md.yml:15
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-md.yml:19
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-md.yml:23
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [check RAID active devices count] ******************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-md.yml:27
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [check RAID spare devices count] *******************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-md.yml:33
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [check RAID metadata version] **********************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-md.yml:39
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-md.yml:47
ok: [localhost] => {
"ansible_facts": {
"storage_test_md_active_devices_re": null,
"storage_test_md_metadata_version_re": null,
"storage_test_md_spare_devices_re": null
},
"changed": false
}

TASK [verify the volumes] *******************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-pool-volumes.yml:3
included: /root/test/storage/tests/test-verify-volume.yml for localhost

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume.yml:2
ok: [localhost] => {
"ansible_facts": {
"_storage_test_volume_present": true,
"_storage_volume_tests": [
"mount",
"fstab",
"fs",
"device",
"encryption",
"md",
"size"
]
},
"changed": false
}

TASK [include_tasks] ************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume.yml:10
included: /root/test/storage/tests/test-verify-volume-mount.yml for localhost
included: /root/test/storage/tests/test-verify-volume-fstab.yml for localhost
included: /root/test/storage/tests/test-verify-volume-fs.yml for localhost
included: /root/test/storage/tests/test-verify-volume-device.yml for localhost
included: /root/test/storage/tests/test-verify-volume-encryption.yml for localhost
included: /root/test/storage/tests/test-verify-volume-md.yml for localhost
included: /root/test/storage/tests/test-verify-volume-size.yml for localhost

TASK [Get expected mount device based on device type] ***************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:6
ok: [localhost] => {
"ansible_facts": {
"storage_test_device_path": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5"
},
"changed": false
}

TASK [Set some facts] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:10
ok: [localhost] => {
"ansible_facts": {
"storage_test_mount_device_matches": [
{
"block_available": 193883505,
"block_size": 4096,
"block_total": 195253095,
"block_used": 1369590,
"device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"fstype": "xfs",
"inode_available": 390696957,
"inode_total": 390696960,
"inode_used": 3,
"mount": "/opt/test1",
"options": "rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota",
"size_available": 794146836480,
"size_total": 799756677120,
"uuid": "f1698797-4201-47e1-ac23-985ab6927e03"
}
],
"storage_test_mount_expected_match_count": "1",
"storage_test_mount_point_matches": [
{
"block_available": 193883505,
"block_size": 4096,
"block_total": 195253095,
"block_used": 1369590,
"device": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"fstype": "xfs",
"inode_available": 390696957,
"inode_total": 390696960,
"inode_used": 3,
"mount": "/opt/test1",
"options": "rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota",
"size_available": 794146836480,
"size_total": 799756677120,
"uuid": "f1698797-4201-47e1-ac23-985ab6927e03"
}
],
"storage_test_swap_expected_matches": "0"
},
"changed": false
}
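
A sketch of how the mount facts just set are typically asserted; this is illustrative only, the actual assertions live in test-verify-volume-mount.yml:

- name: Verify the current mount state by mount point
  assert:
    that:
      - storage_test_mount_point_matches | length == storage_test_mount_expected_match_count | int
      - storage_test_mount_point_matches[0].fstype == 'xfs'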

TASK [Verify the current mount state by device] *********************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:22
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Verify the current mount state by mount point] ****************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:31
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Verify the mount fs type] *************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:39
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [command] ******************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:48
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Gather swap info] *********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:52
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Verify swap status] *******************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:57
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Unset facts] **************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-mount.yml:67
ok: [localhost] => {
"ansible_facts": {
"storage_test_mount_device_matches": null,
"storage_test_mount_expected_match_count": null,
"storage_test_mount_point_matches": null,
"storage_test_swap_expected_matches": null,
"storage_test_swaps": null,
"storage_test_sys_node": null
},
"changed": false
}

TASK [Set some variables for fstab checking] ************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-fstab.yml:2
ok: [localhost] => {
"ansible_facts": {
"storage_test_fstab_expected_id_matches": "1",
"storage_test_fstab_expected_mount_point_matches": "1",
"storage_test_fstab_id_matches": [
"/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 "
],
"storage_test_fstab_mount_point_matches": [
" /opt/test1 "
]
},
"changed": false
}

TASK [Verify that the device identifier appears in /etc/fstab] ******************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-fstab.yml:10
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Verify the fstab mount point] *********************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-fstab.yml:17
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Clean up variables] *******************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-fstab.yml:24
ok: [localhost] => {
"ansible_facts": {
"storage_test_fstab_expected_id_matches": null,
"storage_test_fstab_expected_mount_point_matches": null,
"storage_test_fstab_id_matches": null,
"storage_test_fstab_mount_point_matches": null
},
"changed": false
}

TASK [Verify fs type] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-fs.yml:4
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Verify fs label] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-fs.yml:10
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [See whether the device node is present] ***********************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-device.yml:4
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551731.5102558-17981-146301481834338 && echo ansible-tmp-1596551731.5102558-17981-146301481834338="echo /root/.ansible/tmp/ansible-tmp-1596551731.5102558-17981-146301481834338" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/files/stat.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmp8pgwecrb TO /root/.ansible/tmp/ansible-tmp-1596551731.5102558-17981-146301481834338/AnsiballZ_stat.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551731.5102558-17981-146301481834338/ /root/.ansible/tmp/ansible-tmp-1596551731.5102558-17981-146301481834338/AnsiballZ_stat.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551731.5102558-17981-146301481834338/AnsiballZ_stat.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551731.5102558-17981-146301481834338/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
"checksum_algorithm": "sha1",
"follow": true,
"get_attributes": true,
"get_checksum": true,
"get_md5": false,
"get_mime": true,
"path": "/dev/nvme0n1p1"
}
},
"stat": {
"atime": 1596551717.0222013,
"attr_flags": "",
"attributes": [],
"block_size": 4096,
"blocks": 0,
"charset": "binary",
"ctime": 1596551717.0222013,
"dev": 6,
"device_type": 66306,
"executable": false,
"exists": true,
"gid": 6,
"gr_name": "disk",
"inode": 57774,
"isblk": true,
"ischr": false,
"isdir": false,
"isfifo": false,
"isgid": false,
"islnk": false,
"isreg": false,
"issock": false,
"isuid": false,
"mimetype": "inode/blockdevice",
"mode": "0660",
"mtime": 1596551717.0222013,
"nlink": 1,
"path": "/dev/nvme0n1p1",
"pw_name": "root",
"readable": true,
"rgrp": true,
"roth": false,
"rusr": true,
"size": 0,
"uid": 0,
"version": null,
"wgrp": true,
"woth": false,
"writeable": true,
"wusr": true,
"xgrp": false,
"xoth": false,
"xusr": false
}
}

TASK [Verify the presence/absence of the device node] ***************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-device.yml:10
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Make sure we got info about this volume] **********************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-device.yml:18
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [(1/2) Process volume type (set initial value)] ****************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-device.yml:24
ok: [localhost] => {
"ansible_facts": {
"st_volume_type": "partition"
},
"changed": false
}

TASK [(2/2) Process volume type (get RAID value)] *******************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-device.yml:28
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Verify the volume's device type] ******************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-device.yml:33
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Stat the LUKS device, if encrypted] ***************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551732.8201458-18008-267787763126632 && echo ansible-tmp-1596551732.8201458-18008-267787763126632="echo /root/.ansible/tmp/ansible-tmp-1596551732.8201458-18008-267787763126632" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/files/stat.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpvhksr38p TO /root/.ansible/tmp/ansible-tmp-1596551732.8201458-18008-267787763126632/AnsiballZ_stat.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551732.8201458-18008-267787763126632/ /root/.ansible/tmp/ansible-tmp-1596551732.8201458-18008-267787763126632/AnsiballZ_stat.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551732.8201458-18008-267787763126632/AnsiballZ_stat.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551732.8201458-18008-267787763126632/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
"checksum_algorithm": "sha1",
"follow": true,
"get_attributes": true,
"get_checksum": true,
"get_md5": false,
"get_mime": true,
"path": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5"
}
},
"stat": {
"atime": 1596551717.577207,
"attr_flags": "",
"attributes": [],
"block_size": 4096,
"blocks": 0,
"charset": "binary",
"ctime": 1596551717.577207,
"dev": 6,
"device_type": 64771,
"executable": false,
"exists": true,
"gid": 6,
"gr_name": "disk",
"inode": 57791,
"isblk": true,
"ischr": false,
"isdir": false,
"isfifo": false,
"isgid": false,
"islnk": false,
"isreg": false,
"issock": false,
"isuid": false,
"mimetype": "inode/symlink",
"mode": "0660",
"mtime": 1596551717.577207,
"nlink": 1,
"path": "/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5",
"pw_name": "root",
"readable": true,
"rgrp": true,
"roth": false,
"rusr": true,
"size": 0,
"uid": 0,
"version": null,
"wgrp": true,
"woth": false,
"writeable": true,
"wusr": true,
"xgrp": false,
"xoth": false,
"xusr": false
}
}

TASK [Collect LUKS info for this volume] ****************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:10
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551733.30379-18025-182321882018665 && echo ansible-tmp-1596551733.30379-18025-182321882018665="echo /root/.ansible/tmp/ansible-tmp-1596551733.30379-18025-182321882018665" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/commands/command.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpz2zc3haj TO /root/.ansible/tmp/ansible-tmp-1596551733.30379-18025-182321882018665/AnsiballZ_command.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551733.30379-18025-182321882018665/ /root/.ansible/tmp/ansible-tmp-1596551733.30379-18025-182321882018665/AnsiballZ_command.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551733.30379-18025-182321882018665/AnsiballZ_command.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551733.30379-18025-182321882018665/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"cmd": [
"cryptsetup",
"luksDump",
"/dev/nvme0n1p1"
],
"delta": "0:00:01.304160",
"end": "2020-08-04 10:35:34.930632",
"invocation": {
"module_args": {
"_raw_params": "cryptsetup luksDump /dev/nvme0n1p1",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2020-08-04 10:35:33.626472",
"stderr": "",
"stderr_lines": [],
"stdout": "LUKS header information\nVersion: \t2\nEpoch: \t3\nMetadata area: \t16384 [bytes]\nKeyslots area: \t16744448 [bytes]\nUUID: \td1731709-dfb2-4096-a9c0-6e332d6e95e5\nLabel: \t(no label)\nSubsystem: \t(no subsystem)\nFlags: \t(no flags)\n\nData segments:\n 0: crypt\n\toffset: 16777216 [bytes]\n\tlength: (whole device)\n\tcipher: aes-xts-plain64\n\tsector: 512 [bytes]\n\nKeyslots:\n 0: luks2\n\tKey: 512 bits\n\tPriority: normal\n\tCipher: aes-xts-plain64\n\tCipher key: 512 bits\n\tPBKDF: argon2i\n\tTime cost: 4\n\tMemory: 840148\n\tThreads: 4\n\tSalt: 7f 81 ca 1c 66 76 a1 91 5b c0 81 48 50 31 c6 30 \n\t 63 2b 77 87 c1 3f cb 1d 9c 87 6a b2 8e a6 e0 91 \n\tAF stripes: 4000\n\tAF hash: sha256\n\tArea offset:32768 [bytes]\n\tArea length:258048 [bytes]\n\tDigest ID: 0\nTokens:\nDigests:\n 0: pbkdf2\n\tHash: sha256\n\tIterations: 44043\n\tSalt: 56 8b 9c 92 d4 0b cb 46 2a ec 5f eb fc 36 d2 ff \n\t 3c 43 7b ef d8 5f 96 64 0f c7 03 2d ff fe 55 25 \n\tDigest: ab 7b c5 64 ee 16 fa a2 74 94 d2 7e 7b b0 63 16 \n\t 02 ab 81 2e c4 d7 62 6d 1a d4 a4 a4 77 00 ef 88 ",
"stdout_lines": [
"LUKS header information",
"Version: \t2",
"Epoch: \t3",
"Metadata area: \t16384 [bytes]",
"Keyslots area: \t16744448 [bytes]",
"UUID: \td1731709-dfb2-4096-a9c0-6e332d6e95e5",
"Label: \t(no label)",
"Subsystem: \t(no subsystem)",
"Flags: \t(no flags)",
"",
"Data segments:",
" 0: crypt",
"\toffset: 16777216 [bytes]",
"\tlength: (whole device)",
"\tcipher: aes-xts-plain64",
"\tsector: 512 [bytes]",
"",
"Keyslots:",
" 0: luks2",
"\tKey: 512 bits",
"\tPriority: normal",
"\tCipher: aes-xts-plain64",
"\tCipher key: 512 bits",
"\tPBKDF: argon2i",
"\tTime cost: 4",
"\tMemory: 840148",
"\tThreads: 4",
"\tSalt: 7f 81 ca 1c 66 76 a1 91 5b c0 81 48 50 31 c6 30 ",
"\t 63 2b 77 87 c1 3f cb 1d 9c 87 6a b2 8e a6 e0 91 ",
"\tAF stripes: 4000",
"\tAF hash: sha256",
"\tArea offset:32768 [bytes]",
"\tArea length:258048 [bytes]",
"\tDigest ID: 0",
"Tokens:",
"Digests:",
" 0: pbkdf2",
"\tHash: sha256",
"\tIterations: 44043",
"\tSalt: 56 8b 9c 92 d4 0b cb 46 2a ec 5f eb fc 36 d2 ff ",
"\t 3c 43 7b ef d8 5f 96 64 0f c7 03 2d ff fe 55 25 ",
"\tDigest: ab 7b c5 64 ee 16 fa a2 74 94 d2 7e 7b b0 63 16 ",
"\t 02 ab 81 2e c4 d7 62 6d 1a d4 a4 a4 77 00 ef 88 "
]
}
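
When a specific LUKS version, key size, or cipher is requested, the tests compare it against this cryptsetup luksDump output; those checks are skipped further below because none were specified here. A purely illustrative sketch of such a check (the luks_dump register name is hypothetical):

- name: Check LUKS version
  assert:
    that:
      - luks_dump.stdout is search('Version:\s+2')
  when: storage_test_volume.encryption_luks_version is not none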

TASK [Verify the presence/absence of the LUKS device node] **********************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:16
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Verify that the raw device is the same as the device if not encrypted] ****************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:25
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Make sure we got info about the LUKS volume if encrypted] *****************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:31
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Verify the LUKS volume's device type if encrypted] ************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:37
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Check LUKS version] *******************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:42
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Check LUKS key size] ******************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:48
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Check LUKS cipher] ********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:54
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:60
ok: [localhost] => {
"ansible_facts": {
"_storage_test_crypttab_entries": [
"luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 /dev/nvme0n1p1 -"
],
"_storage_test_expected_crypttab_entries": "1",
"_storage_test_expected_crypttab_key_file": "-"
},
"changed": false
}

TASK [Check for /etc/crypttab entry] ********************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:65
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Validate the format of the crypttab entry] ********************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:70
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Check backing device of crypttab entry] ***********************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:76
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [Check key file of crypttab entry] *****************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:82
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-encryption.yml:88
ok: [localhost] => {
"ansible_facts": {
"_storage_test_crypttab_entries": null,
"_storage_test_expected_crypttab_entries": null,
"_storage_test_expected_crypttab_key_file": null
},
"changed": false
}

TASK [get information about RAID] ***********************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-md.yml:7
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-md.yml:13
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-md.yml:17
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [set_fact] *****************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-md.yml:21
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [check RAID active devices count] ******************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-md.yml:25
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [check RAID spare devices count] *******************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-md.yml:31
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [check RAID metadata version] **********************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-md.yml:37
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [parse the actual size of the volume] **************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-size.yml:3
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [parse the requested size of the volume] ***********************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-size.yml:9
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [debug] ********************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-size.yml:15
ok: [localhost] => {
"storage_test_actual_size": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}

TASK [debug] ********************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-size.yml:18
ok: [localhost] => {
"storage_test_requested_size": {
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}
}

TASK [assert] *******************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume-size.yml:21
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [Clean up facts] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tests/test-verify-volume.yml:16
ok: [localhost] => {
"ansible_facts": {
"_storage_test_volume_present": null
},
"changed": false
}

TASK [Clean up variable namespace] **********************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:39
ok: [localhost] => {
"ansible_facts": {
"storage_test_pool": null
},
"changed": false
}

TASK [Verify the volumes with no pool were correctly managed] *******************************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:46

TASK [Clean up variable namespace] **********************************************************************************************************************************************************************
task path: /root/test/storage/tests/verify-role-results.yml:56
ok: [localhost] => {
"ansible_facts": {
"storage_test_blkinfo": null,
"storage_test_crypttab": null,
"storage_test_fstab": null,
"storage_test_volume": null
},
"changed": false
}

TASK [Remove the encryption layer] **********************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_luks.yml:40
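
This step re-runs the role against the same pool with encryption turned off for the volume. Reconstructed from the defaults applied below rather than copied from tests_luks.yml, the invocation looks roughly like this; depending on the role version, storage_safe_mode may also need to be false for this destructive change to proceed:

- name: Remove the encryption layer
  include_role:
    name: storage
  vars:
    storage_pools:
      - name: foo
        type: partition
        disks:
          - nvme0n1
        volumes:
          - name: test1
            type: partition
            mount_point: /opt/test1
            encryption: false
            encryption_passphrase: yabbadabbadoo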

TASK [storage : Set version specific variables] *********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:2
ok: [localhost] => {
"ansible_facts": {
"blivet_package_list": [
"python3-blivet",
"libblockdev-crypto",
"libblockdev-dm",
"libblockdev-lvm",
"libblockdev-mdraid",
"libblockdev-swap"
]
},
"ansible_included_var_files": [
"/root/test/storage/vars/RedHat_8.yml"
],
"changed": false
}

TASK [storage : define an empty list of pools to be used in testing] ************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:14
ok: [localhost] => {
"ansible_facts": {
"_storage_pools_list": []
},
"changed": false
}

TASK [storage : define an empty list of volumes to be used in testing] **********************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:18
ok: [localhost] => {
"ansible_facts": {
"_storage_volumes_list": []
},
"changed": false
}

TASK [storage : include the appropriate provider tasks] *************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:22
included: /root/test/storage/tasks/main-blivet.yml for localhost

TASK [storage : get a list of rpm packages installed on host machine] ***********************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:2
skipping: [localhost] => {
"changed": false,
"skip_reason": "Conditional result was False"
}

TASK [storage : make sure blivet is available] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:7
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551738.06216-18111-12286179001058 && echo ansible-tmp-1596551738.06216-18111-12286179001058="echo /root/.ansible/tmp/ansible-tmp-1596551738.06216-18111-12286179001058" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmp0nk6d_6q TO /root/.ansible/tmp/ansible-tmp-1596551738.06216-18111-12286179001058/AnsiballZ_dnf.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551738.06216-18111-12286179001058/ /root/.ansible/tmp/ansible-tmp-1596551738.06216-18111-12286179001058/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551738.06216-18111-12286179001058/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551738.06216-18111-12286179001058/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"python3-blivet",
"libblockdev-crypto",
"libblockdev-dm",
"libblockdev-lvm",
"libblockdev-mdraid",
"libblockdev-swap"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Nothing to do",
"rc": 0,
"results": []
}

TASK [storage : initialize internal facts] **************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:18
ok: [localhost] => {
"ansible_facts": {
"_storage_pools": [],
"_storage_vol_defaults": [],
"_storage_vol_pools": [],
"_storage_vols_no_defaults": [],
"_storage_vols_no_defaults_by_pool": {},
"_storage_vols_w_defaults": [],
"_storage_volumes": []
},
"changed": false
}

TASK [storage : Apply defaults to pools and volumes [1/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:28
ok: [localhost] => (item={'name': 'foo', 'type': 'partition', 'disks': ['nvme0n1'], 'volumes': [{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': False, 'encryption_passphrase': 'yabbadabbadoo'}]}) => {
"ansible_facts": {
"_storage_pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}
]
},
"ansible_loop_var": "pool",
"changed": false,
"pool": {
"disks": [
"nvme0n1"
],
"name": "foo",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}
}

TASK [storage : Apply defaults to pools and volumes [2/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:36
ok: [localhost] => (item=[{'state': 'present', 'type': 'partition', 'encryption': False, 'encryption_passphrase': None, 'encryption_key_file': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'name': 'foo', 'disks': ['nvme0n1'], 'volumes': [{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': False, 'encryption_passphrase': 'yabbadabbadoo'}]}, {'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': False, 'encryption_passphrase': 'yabbadabbadoo'}]) => {
"ansible_facts": {
"_storage_vol_defaults": [
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "lvm"
}
],
"_storage_vol_pools": [
"foo"
],
"_storage_vols_no_defaults": [
{
"encryption": false,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
},
"ansible_loop_var": "item",
"changed": false,
"item": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
},
{
"encryption": false,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}

TASK [storage : Apply defaults to pools and volumes [3/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:44
ok: [localhost] => (item=[{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': False, 'encryption_passphrase': 'yabbadabbadoo'}, {'state': 'present', 'type': 'lvm', 'size': 0, 'fs_type': 'xfs', 'fs_label': '', 'fs_create_options': '', 'fs_overwrite_existing': True, 'mount_point': '', 'mount_options': 'defaults', 'mount_check': 0, 'mount_passno': 0, 'mount_device_identifier': 'uuid', 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'encryption': False, 'encryption_passphrase': None, 'encryption_key_file': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None}]) => {
"ansible_facts": {
"_storage_vols_w_defaults": [
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
},
"ansible_index_var": "idx",
"ansible_loop_var": "item",
"changed": false,
"idx": 0,
"item": [
{
"encryption": false,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
},
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "lvm"
}
]
}

TASK [storage : Apply defaults to pools and volumes [4/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:52
ok: [localhost] => (item={'state': 'present', 'type': 'partition', 'encryption': False, 'encryption_passphrase': None, 'encryption_key_file': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'name': 'foo', 'disks': ['nvme0n1'], 'volumes': [{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': False, 'encryption_passphrase': 'yabbadabbadoo'}]}) => {
"ansible_facts": {
"_storage_vols_no_defaults_by_pool": {
"foo": [
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
},
"ansible_loop_var": "item",
"changed": false,
"item": {
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}
}

TASK [storage : Apply defaults to pools and volumes [5/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:61
ok: [localhost] => (item={'state': 'present', 'type': 'partition', 'encryption': False, 'encryption_passphrase': None, 'encryption_key_file': None, 'encryption_cipher': None, 'encryption_key_size': None, 'encryption_luks_version': None, 'raid_level': None, 'raid_device_count': None, 'raid_spare_count': None, 'raid_chunk_size': None, 'raid_metadata_version': None, 'name': 'foo', 'disks': ['nvme0n1'], 'volumes': [{'name': 'test1', 'type': 'partition', 'mount_point': '/opt/test1', 'encryption': False, 'encryption_passphrase': 'yabbadabbadoo'}]}) => {
"ansible_facts": {
"_storage_pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
]
},
"ansible_index_var": "idx",
"ansible_loop_var": "pool",
"changed": false,
"idx": 0,
"pool": {
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_passphrase": "yabbadabbadoo",
"mount_point": "/opt/test1",
"name": "test1",
"type": "partition"
}
]
}
}

TASK [storage : Apply defaults to pools and volumes [6/6]] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:72

TASK [storage : debug] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
"_storage_pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
]
}

TASK [storage : debug] **********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
"_storage_volumes": []
}

TASK [storage : get required packages] ******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551742.6613111-18145-68523253749948 && echo ansible-tmp-1596551742.6613111-18145-68523253749948="echo /root/.ansible/tmp/ansible-tmp-1596551742.6613111-18145-68523253749948" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmp6b23aekf TO /root/.ansible/tmp/ansible-tmp-1596551742.6613111-18145-68523253749948/AnsiballZ_blivet.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551742.6613111-18145-68523253749948/ /root/.ansible/tmp/ansible-tmp-1596551742.6613111-18145-68523253749948/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551742.6613111-18145-68523253749948/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551742.6613111-18145-68523253749948/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"actions": [],
"changed": false,
"crypts": [],
"invocation": {
"module_args": {
"disklabel_type": null,
"packages_only": true,
"pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
],
"safe_mode": true,
"use_partitions": null,
"volumes": []
}
},
"leaves": [],
"mounts": [],
"packages": [
"xfsprogs"
],
"pools": [],
"volumes": []
}

TASK [storage : make sure required packages are installed] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551746.6839507-18212-14130022943421 && echo ansible-tmp-1596551746.6839507-18212-14130022943421="echo /root/.ansible/tmp/ansible-tmp-1596551746.6839507-18212-14130022943421" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpg04gg6cv TO /root/.ansible/tmp/ansible-tmp-1596551746.6839507-18212-14130022943421/AnsiballZ_dnf.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551746.6839507-18212-14130022943421/ /root/.ansible/tmp/ansible-tmp-1596551746.6839507-18212-14130022943421/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551746.6839507-18212-14130022943421/AnsiballZ_dnf.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551746.6839507-18212-14130022943421/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"invocation": {
"module_args": {
"allow_downgrade": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"xfsprogs"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Nothing to do",
"rc": 0,
"results": []
}

TASK [storage : manage the pools and volumes to match the specified state] ******************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'echo ~root && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /root/.ansible/tmp"&& mkdir /root/.ansible/tmp/ansible-tmp-1596551750.5523744-18228-104064955521611 && echo ansible-tmp-1596551750.5523744-18228-104064955521611="echo /root/.ansible/tmp/ansible-tmp-1596551750.5523744-18228-104064955521611" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
PUT /root/.ansible/tmp/ansible-local-15515o7pn3a74/tmpljru2714 TO /root/.ansible/tmp/ansible-tmp-1596551750.5523744-18228-104064955521611/AnsiballZ_blivet.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1596551750.5523744-18228-104064955521611/ /root/.ansible/tmp/ansible-tmp-1596551750.5523744-18228-104064955521611/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1596551750.5523744-18228-104064955521611/AnsiballZ_blivet.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1596551750.5523744-18228-104064955521611/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_blivet_payload__mpl_uj7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1161, in run_module
File "/tmp/ansible_blivet_payload__mpl_uj7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 898, in manage_pool
File "/tmp/ansible_blivet_payload__mpl_uj7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 804, in manage
File "/tmp/ansible_blivet_payload__mpl_uj7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 786, in _manage_volumes
File "/tmp/ansible_blivet_payload__mpl_uj7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 403, in manage
File "/tmp/ansible_blivet_payload__mpl_uj7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 479, in _create
fatal: [localhost]: FAILED! => {
"actions": [],
"changed": false,
"crypts": [],
"invocation": {
"module_args": {
"disklabel_type": null,
"packages_only": false,
"pools": [
{
"disks": [
"nvme0n1"
],
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": null,
"name": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"state": "present",
"type": "partition",
"volumes": [
{
"encryption": false,
"encryption_cipher": null,
"encryption_key_file": null,
"encryption_key_size": null,
"encryption_luks_version": null,
"encryption_passphrase": "yabbadabbadoo",
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "test1",
"pool": "foo",
"raid_chunk_size": null,
"raid_device_count": null,
"raid_level": null,
"raid_metadata_version": null,
"raid_spare_count": null,
"size": 0,
"state": "present",
"type": "partition"
}
]
}
],
"safe_mode": false,
"use_partitions": null,
"volumes": []
}
},
"leaves": [],
"mounts": [],
"msg": "partition allocation failed for volume 'test1'",
"packages": [],
"pools": [],
"volumes": []
}

PLAY RECAP **********************************************************************************************************************************************************************************************
localhost : ok=117 changed=5 unreachable=0 failed=1 skipped=54 rescued=0 ignored=0

/tmp/blivet.log

# cat /tmp/blivet.log | tail -300

  PVs = ['existing 277.81 GiB partition sda3 (30) with existing lvmpv']
  LVs = ['existing 199.97 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs '
 'filesystem',
 'existing 70 GiB lvmlv rhel_storageqe-62-root (56) with existing xfs '
 'filesystem',
 'existing 7.84 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap']
  segment type = linear percent = 0
  VG space used = 70 GiB
2020-08-04 10:35:53,708 INFO program/MainThread: Running [9] dmsetup info -co subsystem --noheadings rhel_storageqe--62-root ...
2020-08-04 10:35:53,714 INFO program/MainThread: stdout[9]: LVM

2020-08-04 10:35:53,715 INFO program/MainThread: stderr[9]:
2020-08-04 10:35:53,715 INFO program/MainThread: ...done [9] (exit code: 0)
2020-08-04 10:35:53,719 DEBUG blivet/MainThread: DeviceTree.handle_format: name: rhel_storageqe-62-root ;
2020-08-04 10:35:53,719 DEBUG blivet/MainThread: no type or existing type for rhel_storageqe--62-root, bailing
2020-08-04 10:35:53,722 DEBUG blivet/MainThread: DeviceTree.handle_device: name: rhel_storageqe--62-swap ; info: {'DEVLINKS': '/dev/disk/by-id/dm-uuid-LVM-DrL6pCi2vQjtrIPEXfDd43GVwv6yUXdwM5iHQzFnhGKLdXhcxFch2QeRkI3VKiNr '
'/dev/rhel_storageqe-62/swap '
'/dev/disk/by-id/dm-name-rhel_storageqe--62-swap '
'/dev/disk/by-uuid/aeaa2293-343b-4399-afa5-7d2ceafac06e '
'/dev/mapper/rhel_storageqe--62-swap',
'DEVNAME': '/dev/dm-1',
'DEVPATH': '/devices/virtual/block/dm-1',
'DEVTYPE': 'disk',
'DM_ACTIVATION': '1',
'DM_LV_NAME': 'swap',
'DM_NAME': 'rhel_storageqe--62-swap',
'DM_SUSPENDED': '0',
'DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG': '1',
'DM_UDEV_PRIMARY_SOURCE_FLAG': '1',
'DM_UDEV_RULES_VSN': '2',
'DM_UUID': 'LVM-DrL6pCi2vQjtrIPEXfDd43GVwv6yUXdwM5iHQzFnhGKLdXhcxFch2QeRkI3VKiNr',
'DM_VG_NAME': 'rhel_storageqe-62',
'ID_FS_TYPE': 'swap',
'ID_FS_USAGE': 'other',
'ID_FS_UUID': 'aeaa2293-343b-4399-afa5-7d2ceafac06e',
'ID_FS_UUID_ENC': 'aeaa2293-343b-4399-afa5-7d2ceafac06e',
'ID_FS_VERSION': '1',
'MAJOR': '253',
'MINOR': '1',
'SUBSYSTEM': 'block',
'SYS_NAME': 'dm-1',
'SYS_PATH': '/sys/devices/virtual/block/dm-1',
'TAGS': ':systemd:',
'USEC_INITIALIZED': '8032184'} ;
2020-08-04 10:35:53,723 INFO blivet/MainThread: scanning rhel_storageqe--62-swap (/sys/devices/virtual/block/dm-1)...
2020-08-04 10:35:53,727 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: rhel_storageqe--62-swap ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,730 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 7.84 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap
2020-08-04 10:35:53,735 DEBUG blivet/MainThread: rhel_storageqe-62 size is 277.81 GiB
2020-08-04 10:35:53,737 DEBUG blivet/MainThread: vg rhel_storageqe-62 has 0 B free
2020-08-04 10:35:53,738 DEBUG blivet/MainThread: rhel_storageqe-62 size is 277.81 GiB
2020-08-04 10:35:53,740 DEBUG blivet/MainThread: vg rhel_storageqe-62 has 0 B free
2020-08-04 10:35:53,730 INFO blivet/MainThread: got device: LVMLogicalVolumeDevice instance (0x7f744a217748) --
name = rhel_storageqe-62-swap status = True id = 69
children = []
parents = ['existing 277.81 GiB lvmvg rhel_storageqe-62 (39)']
uuid = M5iHQz-FnhG-KLdX-hcxF-ch2Q-eRkI-3VKiNr size = 7.84 GiB
format = existing swap
major = 0 minor = 0 exists = True protected = False
sysfs path = /sys/devices/virtual/block/dm-1
target size = 7.84 GiB path = /dev/mapper/rhel_storageqe--62-swap
format args = [] original_format = swap target = None dm_uuid = None VG device = LVMVolumeGroupDevice instance (0x7f744a2b9198) --
name = rhel_storageqe-62 status = True id = 39
children = ['existing 199.97 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs '
'filesystem',
'existing 70 GiB lvmlv rhel_storageqe-62-root (56) with existing xfs '
'filesystem',
'existing 7.84 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap']
parents = ['existing 277.81 GiB partition sda3 (30) with existing lvmpv']
uuid = DrL6pC-i2vQ-jtrI-PEXf-Dd43-GVwv-6yUXdw size = 277.81 GiB
format = existing None
major = 0 minor = 0 exists = True protected = False
sysfs path =
target size = 277.81 GiB path = /dev/rhel_storageqe--62
format args = [] original_format = None free = 0 B PE Size = 4 MiB PE Count = 71119
PE Free = 0 PV Count = 1
modified = False extents = 71119 free space = 0 B
free extents = 0 reserved percent = 0 reserved space = 0 B
PVs = ['existing 277.81 GiB partition sda3 (30) with existing lvmpv']
LVs = ['existing 199.97 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs '
'filesystem',
'existing 70 GiB lvmlv rhel_storageqe-62-root (56) with existing xfs '
'filesystem',
'existing 7.84 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap']
segment type = linear percent = 0
VG space used = 7.84 GiB
2020-08-04 10:35:53,741 INFO program/MainThread: Running [10] dmsetup info -co subsystem --noheadings rhel_storageqe--62-swap ...
2020-08-04 10:35:53,747 INFO program/MainThread: stdout[10]: LVM

2020-08-04 10:35:53,747 INFO program/MainThread: stderr[10]:
2020-08-04 10:35:53,747 INFO program/MainThread: ...done [10] (exit code: 0)
2020-08-04 10:35:53,751 DEBUG blivet/MainThread: DeviceTree.handle_format: name: rhel_storageqe-62-swap ;
2020-08-04 10:35:53,751 DEBUG blivet/MainThread: no type or existing type for rhel_storageqe--62-swap, bailing
2020-08-04 10:35:53,755 DEBUG blivet/MainThread: DeviceTree.handle_device: name: rhel_storageqe--62-home ; info: {'DEVLINKS': '/dev/rhel_storageqe-62/home '
'/dev/disk/by-id/dm-name-rhel_storageqe--62-home '
'/dev/mapper/rhel_storageqe--62-home '
'/dev/disk/by-uuid/25082e54-84e7-4945-87a4-532894d69113 '
'/dev/disk/by-id/dm-uuid-LVM-DrL6pCi2vQjtrIPEXfDd43GVwv6yUXdwN0C3KvQUeXpTjzOZ9P1blbg5bVV4tzxT',
'DEVNAME': '/dev/dm-2',
'DEVPATH': '/devices/virtual/block/dm-2',
'DEVTYPE': 'disk',
'DM_ACTIVATION': '1',
'DM_LV_NAME': 'home',
'DM_NAME': 'rhel_storageqe--62-home',
'DM_SUSPENDED': '0',
'DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG': '1',
'DM_UDEV_PRIMARY_SOURCE_FLAG': '1',
'DM_UDEV_RULES_VSN': '2',
'DM_UUID': 'LVM-DrL6pCi2vQjtrIPEXfDd43GVwv6yUXdwN0C3KvQUeXpTjzOZ9P1blbg5bVV4tzxT',
'DM_VG_NAME': 'rhel_storageqe-62',
'ID_FS_TYPE': 'xfs',
'ID_FS_USAGE': 'filesystem',
'ID_FS_UUID': '25082e54-84e7-4945-87a4-532894d69113',
'ID_FS_UUID_ENC': '25082e54-84e7-4945-87a4-532894d69113',
'MAJOR': '253',
'MINOR': '2',
'SUBSYSTEM': 'block',
'SYS_NAME': 'dm-2',
'SYS_PATH': '/sys/devices/virtual/block/dm-2',
'TAGS': ':systemd:',
'USEC_INITIALIZED': '17451740'} ;
2020-08-04 10:35:53,755 INFO blivet/MainThread: scanning rhel_storageqe--62-home (/sys/devices/virtual/block/dm-2)...
2020-08-04 10:35:53,759 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: rhel_storageqe--62-home ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,762 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 199.97 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs filesystem
2020-08-04 10:35:53,767 DEBUG blivet/MainThread: rhel_storageqe-62 size is 277.81 GiB
2020-08-04 10:35:53,769 DEBUG blivet/MainThread: vg rhel_storageqe-62 has 0 B free
2020-08-04 10:35:53,770 DEBUG blivet/MainThread: rhel_storageqe-62 size is 277.81 GiB
2020-08-04 10:35:53,772 DEBUG blivet/MainThread: vg rhel_storageqe-62 has 0 B free
2020-08-04 10:35:53,762 INFO blivet/MainThread: got device: LVMLogicalVolumeDevice instance (0x7f744a2b9ac8) --
name = rhel_storageqe-62-home status = True id = 43
children = []
parents = ['existing 277.81 GiB lvmvg rhel_storageqe-62 (39)']
uuid = N0C3Kv-QUeX-pTjz-OZ9P-1blb-g5bV-V4tzxT size = 199.97 GiB
format = existing xfs filesystem
major = 0 minor = 0 exists = True protected = False
sysfs path = /sys/devices/virtual/block/dm-2
target size = 199.97 GiB path = /dev/mapper/rhel_storageqe--62-home
format args = [] original_format = xfs target = None dm_uuid = None VG device = LVMVolumeGroupDevice instance (0x7f744a2b9198) --
name = rhel_storageqe-62 status = True id = 39
children = ['existing 199.97 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs '
'filesystem',
'existing 70 GiB lvmlv rhel_storageqe-62-root (56) with existing xfs '
'filesystem',
'existing 7.84 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap']
parents = ['existing 277.81 GiB partition sda3 (30) with existing lvmpv']
uuid = DrL6pC-i2vQ-jtrI-PEXf-Dd43-GVwv-6yUXdw size = 277.81 GiB
format = existing None
major = 0 minor = 0 exists = True protected = False
sysfs path =
target size = 277.81 GiB path = /dev/rhel_storageqe--62
format args = [] original_format = None free = 0 B PE Size = 4 MiB PE Count = 71119
PE Free = 0 PV Count = 1
modified = False extents = 71119 free space = 0 B
free extents = 0 reserved percent = 0 reserved space = 0 B
PVs = ['existing 277.81 GiB partition sda3 (30) with existing lvmpv']
LVs = ['existing 199.97 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs '
'filesystem',
'existing 70 GiB lvmlv rhel_storageqe-62-root (56) with existing xfs '
'filesystem',
'existing 7.84 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap']
segment type = linear percent = 0
VG space used = 199.97 GiB
2020-08-04 10:35:53,773 INFO program/MainThread: Running [11] dmsetup info -co subsystem --noheadings rhel_storageqe--62-home ...
2020-08-04 10:35:53,779 INFO program/MainThread: stdout[11]: LVM

2020-08-04 10:35:53,779 INFO program/MainThread: stderr[11]:
2020-08-04 10:35:53,779 INFO program/MainThread: ...done [11] (exit code: 0)
2020-08-04 10:35:53,783 DEBUG blivet/MainThread: DeviceTree.handle_format: name: rhel_storageqe-62-home ;
2020-08-04 10:35:53,783 DEBUG blivet/MainThread: no type or existing type for rhel_storageqe--62-home, bailing
2020-08-04 10:35:53,787 DEBUG blivet/MainThread: DeviceTree.handle_device: name: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ; info: {'DEVLINKS': '/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-d1731709dfb24096a9c06e332d6e95e5-luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 '
'/dev/disk/by-uuid/f1698797-4201-47e1-ac23-985ab6927e03 '
'/dev/disk/by-id/dm-name-luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 '
'/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5',
'DEVNAME': '/dev/dm-3',
'DEVPATH': '/devices/virtual/block/dm-3',
'DEVTYPE': 'disk',
'DM_NAME': 'luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5',
'DM_SUSPENDED': '0',
'DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG': '1',
'DM_UDEV_PRIMARY_SOURCE_FLAG': '1',
'DM_UDEV_RULES_VSN': '2',
'DM_UUID': 'CRYPT-LUKS2-d1731709dfb24096a9c06e332d6e95e5-luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5',
'ID_FS_TYPE': 'xfs',
'ID_FS_USAGE': 'filesystem',
'ID_FS_UUID': 'f1698797-4201-47e1-ac23-985ab6927e03',
'ID_FS_UUID_ENC': 'f1698797-4201-47e1-ac23-985ab6927e03',
'MAJOR': '253',
'MINOR': '3',
'SUBSYSTEM': 'block',
'SYS_NAME': 'dm-3',
'SYS_PATH': '/sys/devices/virtual/block/dm-3',
'TAGS': ':systemd:',
'USEC_INITIALIZED': '375553204'} ;
2020-08-04 10:35:53,787 INFO blivet/MainThread: scanning luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 (/sys/devices/virtual/block/dm-3)...
2020-08-04 10:35:53,791 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,794 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2020-08-04 10:35:53,794 INFO program/MainThread: Running [12] dmsetup info -co subsystem --noheadings luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ...
2020-08-04 10:35:53,800 INFO program/MainThread: stdout[12]: CRYPT

2020-08-04 10:35:53,800 INFO program/MainThread: stderr[12]:
2020-08-04 10:35:53,800 INFO program/MainThread: ...done [12] (exit code: 0)
2020-08-04 10:35:53,806 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nvme0n1p1 ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,810 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 745.21 GiB partition nvme0n1p1 (213) with existing luks
2020-08-04 10:35:53,811 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 224
2020-08-04 10:35:53,815 DEBUG blivet/MainThread: PartitionDevice.add_child: name: nvme0n1p1 ; child: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ; kids: 0 ;
2020-08-04 10:35:53,815 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 225
2020-08-04 10:35:53,818 DEBUG blivet/MainThread: LUKSDevice._set_format: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ; type: None ; current: None ;
2020-08-04 10:35:53,822 DEBUG blivet/MainThread: LUKSDevice.update_sysfs_path: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ; status: True ;
2020-08-04 10:35:53,822 DEBUG blivet/MainThread: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 sysfs_path set to /sys/devices/virtual/block/dm-3
2020-08-04 10:35:53,826 DEBUG blivet/MainThread: LUKSDevice.read_current_size: exists: True ; path: /dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ; sysfs_path: /sys/devices/virtual/block/dm-3 ;
2020-08-04 10:35:53,826 DEBUG blivet/MainThread: updated luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 size to 745.2 GiB (745.2 GiB)
2020-08-04 10:35:53,827 INFO blivet/MainThread: added luks/dm-crypt luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 (id 223) to device tree
2020-08-04 10:35:53,827 INFO blivet/MainThread: got device: LUKSDevice instance (0x7f744a236860) --
name = luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 status = True id = 223
children = []
parents = ['existing 745.21 GiB partition nvme0n1p1 (213) with existing luks']
uuid = None size = 745.2 GiB
format = existing None
major = 0 minor = 0 exists = True protected = False
sysfs path = /sys/devices/virtual/block/dm-3
target size = 0 B path = /dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5
format args = [] original_format = None target = crypt dm_uuid = None
2020-08-04 10:35:53,828 INFO program/MainThread: Running [13] dmsetup info -co subsystem --noheadings luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ...
2020-08-04 10:35:53,833 INFO program/MainThread: stdout[13]: CRYPT

2020-08-04 10:35:53,834 INFO program/MainThread: stderr[13]:
2020-08-04 10:35:53,834 INFO program/MainThread: ...done [13] (exit code: 0)
2020-08-04 10:35:53,834 INFO program/MainThread: Running [14] dmsetup info -co subsystem --noheadings luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ...
2020-08-04 10:35:53,840 INFO program/MainThread: stdout[14]: CRYPT

2020-08-04 10:35:53,840 INFO program/MainThread: stderr[14]:
2020-08-04 10:35:53,840 INFO program/MainThread: ...done [14] (exit code: 0)
2020-08-04 10:35:53,844 DEBUG blivet/MainThread: DeviceTree.handle_format: name: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ;
2020-08-04 10:35:53,848 DEBUG blivet/MainThread: AppleBootstrapFS.supported: supported: True ;
2020-08-04 10:35:53,848 DEBUG blivet/MainThread: get_format('appleboot') returning AppleBootstrapFS instance with object id 227
2020-08-04 10:35:53,852 DEBUG blivet/MainThread: EFIFS.supported: supported: True ;
2020-08-04 10:35:53,852 DEBUG blivet/MainThread: get_format('efi') returning EFIFS instance with object id 228
2020-08-04 10:35:53,856 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;
2020-08-04 10:35:53,856 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 229
2020-08-04 10:35:53,861 DEBUG blivet/MainThread: MacEFIFS.supported: supported: True ;
2020-08-04 10:35:53,861 DEBUG blivet/MainThread: get_format('macefi') returning MacEFIFS instance with object id 230
2020-08-04 10:35:53,861 INFO blivet/MainThread: type detected on 'luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5' is 'xfs'
2020-08-04 10:35:53,864 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2020-08-04 10:35:53,864 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 231
2020-08-04 10:35:53,867 DEBUG blivet/MainThread: LUKSDevice._set_format: luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ; type: xfs ; current: None ;
2020-08-04 10:35:53,868 INFO blivet/MainThread: got format: existing xfs filesystem
2020-08-04 10:35:53,868 INFO program/MainThread: Running... udevadm settle --timeout=300
2020-08-04 10:35:53,888 DEBUG program/MainThread: Return code: 0
2020-08-04 10:35:53,917 INFO blivet/MainThread: edd: MBR signature on nvme0n1 is zero. new disk image?
2020-08-04 10:35:53,917 INFO blivet/MainThread: edd: MBR signature on sda is zero. new disk image?
2020-08-04 10:35:53,917 INFO blivet/MainThread: edd: MBR signature on sdl is zero. new disk image?
2020-08-04 10:35:53,917 INFO blivet/MainThread: edd: collected mbr signatures: {}
2020-08-04 10:35:53,924 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-root ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,929 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 70 GiB lvmlv rhel_storageqe-62-root (56) with existing xfs filesystem
2020-08-04 10:35:53,929 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-root' to 'rhel_storageqe-62-root' (lvmlv)
2020-08-04 10:35:53,930 DEBUG blivet/MainThread: resolved 'UUID=02369863-9365-4c2c-a2c4-141b221fdf33' to 'sda2' (partition)
2020-08-04 10:35:53,930 DEBUG blivet/MainThread: resolved 'UUID=E3F6-B0B3' to 'sda1' (partition)
2020-08-04 10:35:53,933 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-home ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,937 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 199.97 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs filesystem
2020-08-04 10:35:53,938 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-home' to 'rhel_storageqe-62-home' (lvmlv)
2020-08-04 10:35:53,941 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-swap ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,945 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 7.84 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap
2020-08-04 10:35:53,945 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-swap' to 'rhel_storageqe-62-swap' (lvmlv)
2020-08-04 10:35:53,948 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,952 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned existing 745.2 GiB luks/dm-crypt luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5 (223) with existing xfs filesystem
2020-08-04 10:35:53,952 DEBUG blivet/MainThread: resolved '/dev/mapper/luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5' to 'luks-d1731709-dfb2-4096-a9c0-6e332d6e95e5' (luks/dm-crypt)
2020-08-04 10:35:53,956 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nvme0n1 ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,960 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned existing 745.21 GiB disk nvme0n1 (203) with existing gpt disklabel
2020-08-04 10:35:53,960 DEBUG blivet/MainThread: resolved 'nvme0n1' to 'nvme0n1' (disk)
2020-08-04 10:35:53,964 DEBUG blivet/MainThread: DeviceTree.get_device_by_name: name: nvme0n11 ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,968 DEBUG blivet/MainThread: DeviceTree.get_device_by_name returned None
2020-08-04 10:35:53,971 DEBUG blivet/MainThread: DeviceTree.get_device_by_path: path: /dev/nvme0n11 ; incomplete: False ; hidden: False ;
2020-08-04 10:35:53,975 DEBUG blivet/MainThread: DeviceTree.get_device_by_path returned None
2020-08-04 10:35:53,976 DEBUG blivet/MainThread: failed to resolve '/dev/nvme0n11'
2020-08-04 10:35:53,980 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2020-08-04 10:35:53,980 DEBUG blivet/MainThread: get_format('xfs') returning XFS instance with object id 233
2020-08-04 10:35:53,984 DEBUG blivet/MainThread: XFS.supported: supported: True ;
2020-08-04 10:35:53,984 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 235
2020-08-04 10:35:53,989 DEBUG blivet/MainThread: DiskDevice.add_child: name: nvme0n1 ; child: req0 ; kids: 1 ;
2020-08-04 10:35:53,992 DEBUG blivet/MainThread: PartitionDevice._set_format: req0 ; type: xfs ; current: None ;
2020-08-04 10:35:53,997 DEBUG blivet/MainThread: DiskDevice.remove_child: name: nvme0n1 ; child: req0 ; kids: 2 ;
2020-08-04 10:35:53,998 INFO blivet/MainThread: added partition req0 (id 234) to device tree
2020-08-04 10:35:53,998 INFO blivet/MainThread: registered action: [237] create device partition req0 (id 234)
2020-08-04 10:35:53,999 DEBUG blivet/MainThread: get_format('None') returning DeviceFormat instance with object id 239
2020-08-04 10:35:54,005 DEBUG blivet/MainThread: PartitionDevice._set_format: req0 ; type: xfs ; current: xfs ;
2020-08-04 10:35:54,006 INFO blivet/MainThread: registered action: [238] create format xfs filesystem mounted at /opt/test1 on partition req0 (id 234)
2020-08-04 10:35:54,011 DEBUG blivet/MainThread: DiskDevice.setup: nvme0n1 ; orig: False ; status: True ; controllable: True ;
2020-08-04 10:35:54,015 DEBUG blivet/MainThread: DiskDevice.setup: sda ; orig: False ; status: True ; controllable: True ;
2020-08-04 10:35:54,018 DEBUG blivet/MainThread: DiskDevice.setup: sdl ; orig: False ; status: True ; controllable: True ;
2020-08-04 10:35:54,019 DEBUG blivet/MainThread: removing all non-preexisting partitions ['nvme0n1p1(id 213)', 'req0(id 234)', 'sda1(id 12)', 'sda2(id 20)', 'sda3(id 30)', 'sdl1(id 153)'] from disk(s) ['nvme0n1', 'sda', 'sdl']
2020-08-04 10:35:54,021 DEBUG blivet/MainThread: allocate_partitions: disks=['nvme0n1', 'sda', 'sdl'] ; partitions=['nvme0n1p1(id 213)', 'req0(id 234)', 'sda1(id 12)', 'sda2(id 20)', 'sda3(id 30)', 'sdl1(id 153)']
2020-08-04 10:35:54,022 DEBUG blivet/MainThread: removing all non-preexisting partitions ['req0(id 234)'] from disk(s) ['nvme0n1', 'sda', 'sdl']
2020-08-04 10:35:54,022 DEBUG blivet/MainThread: allocating partition: req0 ; id: 234 ; disks: ['nvme0n1'] ;
boot: False ; primary: False ; size: 256 MiB ; grow: True ; max_size: 0 B ; start: None ; end: None
2020-08-04 10:35:54,023 DEBUG blivet/MainThread: checking freespace on nvme0n1
2020-08-04 10:35:54,024 DEBUG blivet/MainThread: get_best_free_space_region: disk=/dev/nvme0n1 part_type=0 req_size=256 MiB boot=False best=None grow=True start=None
2020-08-04 10:35:54,024 DEBUG blivet/MainThread: checking 34-2047 (1007 KiB)
2020-08-04 10:35:54,024 DEBUG blivet/MainThread: current free range is 34-2047 (1007 KiB)
2020-08-04 10:35:54,025 DEBUG blivet/MainThread: checking 1562822656-1562824334 (839.5 KiB)
2020-08-04 10:35:54,025 DEBUG blivet/MainThread: current free range is 1562822656-1562824334 (839.5 KiB)
2020-08-04 10:35:54,025 DEBUG blivet/MainThread: not enough free space for primary -- trying logical

storage: fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'mount'.

I observed this error with the latest Ansible, but cannot reproduce it with Ansible 2.9.6; please help check it.

[root@storageqe-62 storage]# ansible-playbook -i inventory tests/tests_default.yml
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.

PLAY [Ensure that the role runs with default parameters] ******************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Set version specific variables] ***************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost] => (item=/root/storage/vars/RedHat-8.yml)

TASK [storage : define an empty list of pools to be used in testing] ******************************************************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : define an empty list of volumes to be used in testing] ****************************************************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : include the appropriate provider tasks] *******************************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"reason": "couldn't resolve module/action 'mount'. This often indicates a misspelling, missing collection, or incorrect module path.\n\nThe error appears to be in '/root/storage/tasks/main-blivet.yml': line 132, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n#\n- name: manage mounts to match the specified state\n ^ here\n"}

PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

[root@storageqe-62 storage]# ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible 2.10.0.dev0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible_base-2.10.0.dev0-py3.6.egg/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Dec 5 2019, 15:45:45) [GCC 8.3.1 20191121]
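
This failure is consistent with the mount module having moved out of ansible-base and into the ansible.posix collection as of Ansible 2.10, which would explain why 2.9.6 still resolves it. A minimal sketch of one way to make the module resolvable again, assuming the collection simply is not installed (the file name below is illustrative, not taken from this report):

---
# requirements.yml (illustrative name); install with:
#   ansible-galaxy collection install -r requirements.yml
collections:
  - name: ansible.posix   # ships the mount module used by the role's mount-management task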

storage: role doesn't support 78 characters' length for volume name

playbook:

- hosts: all
  become: true
  vars:
    volume_group_size: '10g'
    volume_size: '80g'
    storage_safe_mode: false

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 2

    - name: Create three LVM logical volumes under one volume group
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo1
            disks: ["{{ unused_disks[0] }}"]
            volumes:
              - name: abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
                size: "{{ volume_size }}"
                mount_point: '/opt/test1'

    - name: Clean up
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo1
            disks: ["{{ unused_disks[0] }}"]
            state: absent
            volumes: []

Execution log:

TASK [storage : Apply defaults to pools and volumes [6/6]] **************************************************************************************************************************************************

TASK [storage : debug] **************************************************************************************************************************************************************************************
ok: [localhost] => {
"_storage_pools": [
{
"disks": [
"sdd"
],
"name": "foo1",
"state": "present",
"type": "lvm",
"volumes": [
{
"fs_create_options": "",
"fs_label": "",
"fs_overwrite_existing": true,
"fs_type": "xfs",
"mount_check": 0,
"mount_device_identifier": "uuid",
"mount_options": "defaults",
"mount_passno": 0,
"mount_point": "/opt/test1",
"name": "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz",
"pool": "foo1",
"size": "80g",
"state": "present",
"type": "lvm"
}
]
}
]
}

TASK [storage : debug] **************************************************************************************************************************************************************************************
ok: [localhost] => {
"_storage_volumes": []
}

TASK [storage : get required packages] **********************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "leaves": [], "mounts": [], "msg": "failed to set up volume 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz'", "packages": [], "pools": [], "volumes": []}

PLAY RECAP **************************************************************************************************************************************************************************************************
localhost : ok=33 changed=0 unreachable=0 failed=1 skipped=12 rescued=0 ignored=0
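
A workaround on the caller's side, until the role validates this itself, could be to check the name length before including the role. A minimal sketch, assuming a caller-chosen threshold (the value 55 and the disk name below are illustrative, not a documented LVM limit):

---
- hosts: all
  become: true
  vars:
    lv_name: abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
    max_lv_name_length: 55           # illustrative threshold, not a documented LVM limit
  tasks:
    - name: Fail early when the volume name exceeds the chosen threshold
      ansible.builtin.assert:
        that:
          - lv_name | length <= max_lv_name_length
        fail_msg: "volume name is {{ lv_name | length }} characters long, maximum allowed here is {{ max_lv_name_length }}"

    - name: Create the volume only after the name check passed
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo1
            disks: ['sdd']           # illustrative disk
            volumes:
              - name: "{{ lv_name }}"
                size: '80g'
                mount_point: '/opt/test1'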

If an option is omitted on an existing volume, it gets reset to the default.

Omitted mount_point: the volume gets unmounted.

Type switching: if I omit fs_type:, the volume gets reformatted with the default type (xfs). Wouldn't it be more reasonable to keep the existing filesystem? (note that with storage_safe_mode introduced in #43, any destructive operation is prevented - the role errors out. Still, there is the question whether even attempting to change the type to default is reasonable.)

Originally posted by @pcahyna in #43 (comment)
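
Until the role preserves omitted settings, a defensive pattern on the caller's side is to restate every setting that must not change whenever the role is re-run against an existing volume. A minimal sketch (disk and values are illustrative):

---
- hosts: all
  become: true
  tasks:
    - name: Re-run the role, explicitly restating settings that must not change
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            disks: ['sdb']                  # illustrative disk
            volumes:
              - name: test1
                fs_type: ext4               # restated so it is not reset to the default (xfs)
                mount_point: /opt/test1     # restated so the volume stays mounted
                size: '4g'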

idempotence tests are weak

One option for better idempotence tests would be to save the data we collect for verifications and compare the data for the two runs instead of (or in addition to) doing the normal verification.
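
A minimal sketch of that comparison idea, using lsblk output as a stand-in for whatever data the verification tasks actually collect (the test_pools variable is hypothetical):

---
- hosts: all
  become: true
  tasks:
    - name: First role run
      include_role:
        name: storage
      vars:
        storage_pools: "{{ test_pools }}"   # hypothetical variable holding the pool spec

    - name: Collect state after the first run
      ansible.builtin.command: lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT --json
      register: first_run_state
      changed_when: false

    - name: Second (should-be-idempotent) role run
      include_role:
        name: storage
      vars:
        storage_pools: "{{ test_pools }}"

    - name: Collect state after the second run
      ansible.builtin.command: lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT --json
      register: second_run_state
      changed_when: false

    - name: The two snapshots should be identical if the role is idempotent
      ansible.builtin.assert:
        that:
          - first_run_state.stdout == second_run_state.stdout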

Document default to destructive behavior

@dwlehman reminder that we need to clearly communicate to end users via upstream documentation that the current behavior is to default to destructively overwriting existing and conflicting volume groups and filesystems.

We will need to file a separate issue to default to safely preserving data and having the role fail in the event of conflicts while providing the user some option to specify that they desire overwriting and destroying existing state. But for now we simply need to clearly and emphatically communicate the behavior.
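
Until that documentation lands, callers can at least opt into the non-destructive behavior mentioned above by keeping storage_safe_mode enabled, so the role errors out instead of overwriting conflicting devices. A minimal sketch (the disk is illustrative):

---
- hosts: all
  become: true
  vars:
    storage_safe_mode: true   # refuse destructive changes; the role errors out instead
  tasks:
    - include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            disks: ['sdb']    # illustrative disk that may already carry data
            volumes:
              - name: test1
                size: '4g'
                mount_point: /opt/test1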

storage: size doesn't work for partition pool

playbook

---
- hosts: all
  become: true
  vars:
    mount_location: '/opt/test1'
    volume_group_size: '5g'
    volume_size_before: '4g'
    volume_size_after: '10g'
    storage_safe_mode: false    

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create pool with partition
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              type: partition
              volumes:
                - name: test1
                  type: partition
                  fs_type: 'ext4'
                  size: '4g'
                  mount_point: "{{ mount_location }}"

ansible log

TASK [storage : manage the pools and volumes to match the specified state] ****************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1593845989.9086514-186337-143479670088877 && echo ansible-tmp-1593845989.9086514-186337-143479670088877="` echo /root/.ansible/tmp/ansible-tmp-1593845989.9086514-186337-143479670088877 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-1856838dp9htp0/tmpghd9odlw TO /root/.ansible/tmp/ansible-tmp-1593845989.9086514-186337-143479670088877/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1593845989.9086514-186337-143479670088877/ /root/.ansible/tmp/ansible-tmp-1593845989.9086514-186337-143479670088877/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1593845989.9086514-186337-143479670088877/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1593845989.9086514-186337-143479670088877/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "actions": [
        {
            "action": "create format",
            "device": "/dev/sdd",
            "fs_type": "disklabel"
        },
        {
            "action": "create device",
            "device": "/dev/sdd1",
            "fs_type": null
        },
        {
            "action": "create format",
            "device": "/dev/sdd1",
            "fs_type": "ext4"
        }
    ],
    "changed": true,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [
                {
                    "disks": [
                        "sdd"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "name": "foo",
                    "raid_level": null,
                    "state": "present",
                    "type": "partition",
                    "volumes": [
                        {
                            "_device": "/dev/sdd1",
                            "_kernel_device": "/dev/sdd1",
                            "_mount_id": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004",
                            "_raw_device": "/dev/sdd1",
                            "_raw_kernel_device": "/dev/sdd1",
                            "encryption": false,
                            "encryption_cipher": null,
                            "encryption_key_file": null,
                            "encryption_key_size": null,
                            "encryption_luks_version": null,
                            "encryption_passphrase": null,
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "ext4",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "test1",
                            "pool": "foo",
                            "raid_chunk_size": null,
                            "raid_device_count": null,
                            "raid_level": null,
                            "raid_metadata_version": null,
                            "raid_spare_count": null,
                            "size": "4g",
                            "state": "present",
                            "type": "partition"
                        }
                    ]
                }
            ],
            "safe_mode": false,
            "use_partitions": null,
            "volumes": []
        }
    },
    "leaves": [
        "/dev/sda1",
        "/dev/sda2",
        "/dev/mapper/rhel_storageqe--62-home",
        "/dev/mapper/rhel_storageqe--62-root",
        "/dev/mapper/rhel_storageqe--62-swap",
        "/dev/sdb",
        "/dev/sdh",
        "/dev/sdi",
        "/dev/sdj",
        "/dev/sdc",
        "/dev/sdk",
        "/dev/sdl1",
        "/dev/sde",
        "/dev/sdf",
        "/dev/sdg",
        "/dev/nvme1n2",
        "/dev/sdd1"
    ],
    "mounts": [
        {
            "dump": 0,
            "fstype": "ext4",
            "opts": "defaults",
            "passno": 0,
            "path": "/opt/test1",
            "src": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004",
            "state": "mounted"
        }
    ],
    "packages": [
        "xfsprogs",
        "lvm2",
        "dosfstools",
        "e2fsprogs"
    ],
    "pools": [
        {
            "disks": [
                "sdd"
            ],
            "encryption": false,
            "encryption_cipher": null,
            "encryption_key_file": null,
            "encryption_key_size": null,
            "encryption_luks_version": null,
            "encryption_passphrase": null,
            "name": "foo",
            "raid_level": null,
            "state": "present",
            "type": "partition",
            "volumes": [
                {
                    "_device": "/dev/sdd1",
                    "_kernel_device": "/dev/sdd1",
                    "_mount_id": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004",
                    "_raw_device": "/dev/sdd1",
                    "_raw_kernel_device": "/dev/sdd1",
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "ext4",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "test1",
                    "pool": "foo",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "size": "4g",
                    "state": "present",
                    "type": "partition"
                }
            ]
        }
    ],
    "volumes": []
}

TASK [storage : debug] ********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:113
ok: [localhost] => {
    "blivet_output": {
        "actions": [
            {
                "action": "create format",
                "device": "/dev/sdd",
                "fs_type": "disklabel"
            },
            {
                "action": "create device",
                "device": "/dev/sdd1",
                "fs_type": null
            },
            {
                "action": "create format",
                "device": "/dev/sdd1",
                "fs_type": "ext4"
            }
        ],
        "changed": true,
        "crypts": [],
        "failed": false,
        "leaves": [
            "/dev/sda1",
            "/dev/sda2",
            "/dev/mapper/rhel_storageqe--62-home",
            "/dev/mapper/rhel_storageqe--62-root",
            "/dev/mapper/rhel_storageqe--62-swap",
            "/dev/sdb",
            "/dev/sdh",
            "/dev/sdi",
            "/dev/sdj",
            "/dev/sdc",
            "/dev/sdk",
            "/dev/sdl1",
            "/dev/sde",
            "/dev/sdf",
            "/dev/sdg",
            "/dev/nvme1n2",
            "/dev/sdd1"
        ],
        "mounts": [
            {
                "dump": 0,
                "fstype": "ext4",
                "opts": "defaults",
                "passno": 0,
                "path": "/opt/test1",
                "src": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004",
                "state": "mounted"
            }
        ],
        "packages": [
            "xfsprogs",
            "lvm2",
            "dosfstools",
            "e2fsprogs"
        ],
        "pools": [
            {
                "disks": [
                    "sdd"
                ],
                "encryption": false,
                "encryption_cipher": null,
                "encryption_key_file": null,
                "encryption_key_size": null,
                "encryption_luks_version": null,
                "encryption_passphrase": null,
                "name": "foo",
                "raid_level": null,
                "state": "present",
                "type": "partition",
                "volumes": [
                    {
                        "_device": "/dev/sdd1",
                        "_kernel_device": "/dev/sdd1",
                        "_mount_id": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004",
                        "_raw_device": "/dev/sdd1",
                        "_raw_kernel_device": "/dev/sdd1",
                        "encryption": false,
                        "encryption_cipher": null,
                        "encryption_key_file": null,
                        "encryption_key_size": null,
                        "encryption_luks_version": null,
                        "encryption_passphrase": null,
                        "fs_create_options": "",
                        "fs_label": "",
                        "fs_overwrite_existing": true,
                        "fs_type": "ext4",
                        "mount_check": 0,
                        "mount_device_identifier": "uuid",
                        "mount_options": "defaults",
                        "mount_passno": 0,
                        "mount_point": "/opt/test1",
                        "name": "test1",
                        "pool": "foo",
                        "raid_chunk_size": null,
                        "raid_device_count": null,
                        "raid_level": null,
                        "raid_metadata_version": null,
                        "raid_spare_count": null,
                        "size": "4g",
                        "state": "present",
                        "type": "partition"
                    }
                ]
            }
        ],
        "volumes": []
    }
}

TASK [storage : set the list of pools for test verification] ******************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:116
ok: [localhost] => {
    "ansible_facts": {
        "_storage_pools_list": [
            {
                "disks": [
                    "sdd"
                ],
                "encryption": false,
                "encryption_cipher": null,
                "encryption_key_file": null,
                "encryption_key_size": null,
                "encryption_luks_version": null,
                "encryption_passphrase": null,
                "name": "foo",
                "raid_level": null,
                "state": "present",
                "type": "partition",
                "volumes": [
                    {
                        "_device": "/dev/sdd1",
                        "_kernel_device": "/dev/sdd1",
                        "_mount_id": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004",
                        "_raw_device": "/dev/sdd1",
                        "_raw_kernel_device": "/dev/sdd1",
                        "encryption": false,
                        "encryption_cipher": null,
                        "encryption_key_file": null,
                        "encryption_key_size": null,
                        "encryption_luks_version": null,
                        "encryption_passphrase": null,
                        "fs_create_options": "",
                        "fs_label": "",
                        "fs_overwrite_existing": true,
                        "fs_type": "ext4",
                        "mount_check": 0,
                        "mount_device_identifier": "uuid",
                        "mount_options": "defaults",
                        "mount_passno": 0,
                        "mount_point": "/opt/test1",
                        "name": "test1",
                        "pool": "foo",
                        "raid_chunk_size": null,
                        "raid_device_count": null,
                        "raid_level": null,
                        "raid_metadata_version": null,
                        "raid_spare_count": null,
                        "size": "4g",
                        "state": "present",
                        "type": "partition"
                    }
                ]
            }
        ]
    },
    "changed": false
}

TASK [storage : set the list of volumes for test verification] ****************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:120
ok: [localhost] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [storage : remove obsolete mounts] ***************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:136

TASK [storage : tell systemd to refresh its view of /etc/fstab] ***************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:147
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1593846001.2313185-186566-44315424536100 && echo ansible-tmp-1593846001.2313185-186566-44315424536100="` echo /root/.ansible/tmp/ansible-tmp-1593846001.2313185-186566-44315424536100 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/systemd.py
<localhost> PUT /root/.ansible/tmp/ansible-local-1856838dp9htp0/tmpf6tkcx9w TO /root/.ansible/tmp/ansible-tmp-1593846001.2313185-186566-44315424536100/AnsiballZ_systemd.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1593846001.2313185-186566-44315424536100/ /root/.ansible/tmp/ansible-tmp-1593846001.2313185-186566-44315424536100/AnsiballZ_systemd.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1593846001.2313185-186566-44315424536100/AnsiballZ_systemd.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1593846001.2313185-186566-44315424536100/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "daemon_reexec": false,
            "daemon_reload": true,
            "enabled": null,
            "force": null,
            "masked": null,
            "name": null,
            "no_block": false,
            "scope": null,
            "state": null,
            "user": null
        }
    },
    "name": null,
    "status": {}
}

TASK [storage : set up new/current mounts] ************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:152
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1593846003.272524-186605-26342796838830 && echo ansible-tmp-1593846003.272524-186605-26342796838830="` echo /root/.ansible/tmp/ansible-tmp-1593846003.272524-186605-26342796838830 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/mount.py
<localhost> PUT /root/.ansible/tmp/ansible-local-1856838dp9htp0/tmpro_sbuq7 TO /root/.ansible/tmp/ansible-tmp-1593846003.272524-186605-26342796838830/AnsiballZ_mount.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1593846003.272524-186605-26342796838830/ /root/.ansible/tmp/ansible-tmp-1593846003.272524-186605-26342796838830/AnsiballZ_mount.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1593846003.272524-186605-26342796838830/AnsiballZ_mount.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1593846003.272524-186605-26342796838830/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => (item={'src': 'UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004', 'path': '/opt/test1', 'fstype': 'ext4', 'opts': 'defaults', 'dump': 0, 'passno': 0, 'state': 'mounted'}) => {
    "ansible_loop_var": "mount_info",
    "changed": true,
    "dump": "0",
    "fstab": "/etc/fstab",
    "fstype": "ext4",
    "invocation": {
        "module_args": {
            "backup": false,
            "boot": true,
            "dump": null,
            "fstab": null,
            "fstype": "ext4",
            "opts": "defaults",
            "passno": null,
            "path": "/opt/test1",
            "src": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004",
            "state": "mounted"
        }
    },
    "mount_info": {
        "dump": 0,
        "fstype": "ext4",
        "opts": "defaults",
        "passno": 0,
        "path": "/opt/test1",
        "src": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004",
        "state": "mounted"
    },
    "name": "/opt/test1",
    "opts": "defaults",
    "passno": "0",
    "src": "UUID=724bb9aa-9ccb-40d3-bb7b-d15fb76d7004"
}

TASK [storage : tell systemd to refresh its view of /etc/fstab] ***************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:163
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1593846004.2413008-186629-241556125438887 && echo ansible-tmp-1593846004.2413008-186629-241556125438887="` echo /root/.ansible/tmp/ansible-tmp-1593846004.2413008-186629-241556125438887 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/systemd.py
<localhost> PUT /root/.ansible/tmp/ansible-local-1856838dp9htp0/tmp77agcjux TO /root/.ansible/tmp/ansible-tmp-1593846004.2413008-186629-241556125438887/AnsiballZ_systemd.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1593846004.2413008-186629-241556125438887/ /root/.ansible/tmp/ansible-tmp-1593846004.2413008-186629-241556125438887/AnsiballZ_systemd.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1593846004.2413008-186629-241556125438887/AnsiballZ_systemd.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1593846004.2413008-186629-241556125438887/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "daemon_reexec": false,
            "daemon_reload": true,
            "enabled": null,
            "force": null,
            "masked": null,
            "name": null,
            "no_block": false,
            "scope": null,
            "state": null,
            "user": null
        }
    },
    "name": null,
    "status": {}
}

TASK [storage : Manage /etc/crypttab to account for changes we just made] *****************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:171

TASK [storage : Update facts] *************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:186
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1593846005.4868796-186672-253555565078477 && echo ansible-tmp-1593846005.4868796-186672-253555565078477="` echo /root/.ansible/tmp/ansible-tmp-1593846005.4868796-186672-253555565078477 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/system/setup.py
<localhost> PUT /root/.ansible/tmp/ansible-local-1856838dp9htp0/tmp51gd2aez TO /root/.ansible/tmp/ansible-tmp-1593846005.4868796-186672-253555565078477/AnsiballZ_setup.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1593846005.4868796-186672-253555565078477/ /root/.ansible/tmp/ansible-tmp-1593846005.4868796-186672-253555565078477/AnsiballZ_setup.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1593846005.4868796-186672-253555565078477/AnsiballZ_setup.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1593846005.4868796-186672-253555565078477/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
META: ran handlers

PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost                  : ok=43   changed=2    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0  

The size: 4g setting doesn't take effect for the sdd disk; the created partition spans the entire 111.8G disk:

[root@storageqe-62 storage]# lsblk /dev/sdd
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdd      8:48   0 111.8G  0 disk 
└─sdd1   8:49   0 111.8G  0 part 
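
For reference, a minimal playbook sketch reconstructed from the module arguments above (pool type, disk, size, and mount point come from the invocation; everything else is left at role defaults, so this is a reconstruction rather than the original test playbook):

---
- hosts: all
  become: true
  tasks:
    - name: Create a 4g ext4 partition volume on sdd in a partition pool
      include_role:
        name: storage
      vars:
        storage_safe_mode: false
        storage_pools:
          - name: foo
            type: partition
            disks: ['sdd']
            volumes:
              - name: test1
                type: partition
                size: '4g'
                fs_type: ext4
                mount_point: /opt/test1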

storage: formatting an entire disk with an ext4 fs fails on RHEL7

Hi
I found this issue when formatting an entire disk with an ext4 fs on RHEL7. I have to add "fs_create_options: '-F'" to work around it, but the fix for this option has still not been merged.

playbook

[root@storageqe-62 storage]# cat tests/b.yml 
---
- hosts: all
  become: true
  vars:
    mount_location: '/opt/test1'
    volume_group_size: '5g'

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: tests default fs
      block:
        - name: tests default fs
          include_role:
            name: storage
          vars:
            storage_volumes:
              - name: images
                type: disk
                fs_type: 'ext4'
#                fs_create_options: '-F'
                disks: "{{ unused_disks }}" 
                mount_point: /opt/images

execution log

TASK [storage : manage the pools and volumes to match the specified state] *******************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1594026521.65-26203-200113320838419 && echo ansible-tmp-1594026521.65-26203-200113320838419="` echo /root/.ansible/tmp/ansible-tmp-1594026521.65-26203-200113320838419 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-25540Evjebv/tmpuruiKs TO /root/.ansible/tmp/ansible-tmp-1594026521.65-26203-200113320838419/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1594026521.65-26203-200113320838419/ /root/.ansible/tmp/ansible-tmp-1594026521.65-26203-200113320838419/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1594026521.65-26203-200113320838419/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1594026521.65-26203-200113320838419/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
  File "/tmp/ansible_blivet_payload_UdK0Ld/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1173, in run_module
  File "/usr/lib/python2.7/site-packages/blivet3/actionlist.py", line 48, in wrapped_func
    return func(obj, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/blivet3/actionlist.py", line 327, in process
    action.execute(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet3/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/blivet3/deviceaction.py", line 637, in execute
    options=self.device.format_args)
  File "/usr/lib/python2.7/site-packages/blivet3/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/blivet3/formats/__init__.py", line 515, in create
    self._create(**kwargs)
  File "/usr/lib/python2.7/site-packages/blivet3/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/blivet3/formats/fs.py", line 409, in _create
    raise FormatCreateError(e, self.device)
fatal: [localhost]: FAILED! => {
    "actions": [], 
    "changed": false, 
    "crypts": [], 
    "invocation": {
        "module_args": {
            "disklabel_type": null, 
            "packages_only": false, 
            "pools": [], 
            "safe_mode": true, 
            "use_partitions": null, 
            "volumes": [
                {
                    "_device": "/dev/sdd", 
                    "_mount_id": "/dev/sdd", 
                    "_raw_device": "/dev/sdd", 
                    "disks": [
                        "sdd"
                    ], 
                    "encryption": false, 
                    "encryption_cipher": null, 
                    "encryption_key_file": null, 
                    "encryption_key_size": null, 
                    "encryption_luks_version": null, 
                    "encryption_passphrase": null, 
                    "fs_create_options": "", 
                    "fs_label": "", 
                    "fs_overwrite_existing": true, 
                    "fs_type": "ext4", 
                    "mount_check": 0, 
                    "mount_device_identifier": "uuid", 
                    "mount_options": "defaults", 
                    "mount_passno": 0, 
                    "mount_point": "/opt/images", 
                    "name": "images", 
                    "raid_chunk_size": null, 
                    "raid_device_count": null, 
                    "raid_level": null, 
                    "raid_metadata_version": null, 
                    "raid_spare_count": null, 
                    "size": 0, 
                    "state": "present", 
                    "type": "disk"
                }
            ]
        }
    }, 
    "leaves": [], 
    "mounts": [], 
    "msg": "Failed to commit changes to disk: (FSError('format failed: 1',), u'/dev/sdd')", 
    "packages": [
        "dosfstools", 
        "xfsprogs", 
        "e2fsprogs", 
        "lvm2"
    ], 
    "pools": [], 
    "volumes": []
}

PLAY RECAP ***********************************************************************************************************************************************************************************************
localhost                  : ok=31   changed=0    unreachable=0    failed=1    skipped=19   rescued=0    ignored=0 

cat /tmp/blivet3.log

2020-07-06 05:08:46,224 INFO program/MainThread: Running... mke2fs -t ext4 /dev/sdd
2020-07-06 05:08:46,233 INFO program/MainThread: stdout:
2020-07-06 05:08:46,233 INFO program/MainThread: /dev/sdd is entire device, not just one partition!
2020-07-06 05:08:46,233 INFO program/MainThread: Proceed anyway? (y,n) 
2020-07-06 05:08:46,233 INFO program/MainThread: stderr:
2020-07-06 05:08:46,233 INFO program/MainThread: mke2fs 1.42.9 (28-Dec-2013)
2020-07-06 05:08:46,234 DEBUG program/MainThread: Return code: 1
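
The log shows mke2fs stopping at the interactive "Proceed anyway?" prompt because the target is a whole disk rather than a partition. A minimal sketch of the underlying workaround, which is effectively what passing fs_create_options: '-F' through the role does, is:

# -F forces mke2fs to create the filesystem on a whole-disk device without prompting
mke2fs -t ext4 -F /dev/sdd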

"required tools for file system 'ext3' are missing" when changing FS_type from swap to ext3 using rhel-system-roles-1.0-14.el8.noarch

Hi David,
Changing fs_type from swap to ext3 fails with "required tools for file system 'ext3' are missing" when using rhel-system-roles-1.0-14.el8.noarch on a rhel-8.3 VM. You can run this playbook (tests_swap.yml) on a clean VM to reproduce the problem. If you then change ext3 to ext4 and back to ext3, the problem can no longer be reproduced.

[root@rhel83-2 tests]# rpm -qa | grep rhel-system-role
rhel-system-roles-1.0-14.el8.noarch
[root@rhel83-2 tests]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 Beta (Ootpa)
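
To reproduce, the invocation is the usual verbose test run (a sketch assuming the same checkout layout used in the other reports here):

ansible-playbook -i inventory tests/tests_swap.yml -vvvv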

playbook

---
- hosts: all
  become: true
  vars:
    storage_safe_mode: false
    mount_location: '/opt/test'
    volume_size: '5g'

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_size }}"
        max_return: 1

    - name: Create a disk device with swap
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            disks: "{{ unused_disks }}"
            fs_type: 'swap'

    - include_tasks: verify-role-results.yml

    - name: Change the disk device file system type to ext3
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            mount_point: "{{ mount_location }}"
            fs_type: ext3
            disks: "{{ unused_disks }}"

    - include_tasks: verify-role-results.yml

    - name: Repeat the previous invocation to verify idempotence
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            mount_point: "{{ mount_location }}"
            fs_type: ext3
            disks: "{{ unused_disks }}"

    - include_tasks: verify-role-results.yml

    - name: Change it back to swap
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            disks: "{{ unused_disks }}"
            fs_type: 'swap'

    - include_tasks: verify-role-results.yml

    - name: Repeat the previous invocation to verify idempotence
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            disks: "{{ unused_disks }}"
            fs_type: 'swap'

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: storage
      vars:
        storage_volumes:
          - name: test1
            type: disk
            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location }}"
            state: absent

    - include_tasks: verify-role-results.yml

The full traceback

The full traceback is:
  File "/tmp/ansible_blivet_payload_gt4wtd8f/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1150, in run_module
  File "/tmp/ansible_blivet_payload_gt4wtd8f/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 868, in manage_volume
  File "/tmp/ansible_blivet_payload_gt4wtd8f/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 353, in manage
  File "/tmp/ansible_blivet_payload_gt4wtd8f/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 385, in _create
  File "/tmp/ansible_blivet_payload_gt4wtd8f/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 331, in _reformat
  File "/tmp/ansible_blivet_payload_gt4wtd8f/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 284, in _get_format
fatal: [127.0.0.1]: FAILED! => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [],
            "safe_mode": false,
            "use_partitions": null,
            "volumes": [
                {
                    "disks": [
                        "sdb"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "ext3",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test",
                    "name": "test1",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "size": 0,
                    "state": "present",
                    "type": "disk"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "required tools for file system 'ext3' are missing",
    "packages": [],
    "pools": [],
    "volumes": []
}

PLAY RECAP **************************************************************************************
127.0.0.1                  : ok=88   changed=5    unreachable=0    failed=1    skipped=46   rescued=0    ignored=0  

storage: ext2/3/4 resize function doesn't work

#97 should fix this issue.

#cat resize.yml
---
- hosts: all
  become: true
  vars:
    mount_location: '/opt/test1'
    volume_group_size: '5g'
    volume_size_before: '10g'
    volume_size_after: '15g'
    storage_safe_mode: false

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create one LVM logical volume with "{{ volume_size_before }}" under one volume group
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              type: lvm
              volumes:
                - name: test1
                  fs_type: 'ext4'
                  size: "{{ volume_size_before }}"
                  mount_point: "{{ mount_location }}"

    - shell: lsblk | grep foo-test1

    - shell: mount | grep foo-test1

    - include_tasks: verify-role-results.yml

    - name: Change volume_size to "{{ volume_size_after }}"
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              type: lvm
              disks: "{{ unused_disks }}"
              volumes:
                - name: test1
                  fs_type: 'ext4'
                  size: "{{ volume_size_after }}"
                  mount_point: "{{ mount_location }}"

    - shell: lsblk | grep foo-test1

    - shell: mount | grep foo-test1

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              state: absent
              volumes:
                - name: test1
                  size: "{{ volume_size_after }}"
                  mount_point: "{{ mount_location }}"

    - include_tasks: verify-role-results.yml
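
For a manual spot-check of whether the resize actually took effect (to be run between the resize and the Clean up steps above; a sketch using standard tools, with the device path following the pool/volume names in the playbook):

lsblk /dev/mapper/foo-test1    # logical volume size as seen by the kernel
df -h /opt/test1               # filesystem size as seen by the mounted ext4 fs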

Creating new LV in existing VG requires specifying disk

I want to create a new LV in an existing VG. I would like the role to pick up the existing setup so that I only have to specify the new things to add.

Today I do this with a simple lvcreate command; I only need to provide the VG name, LV name, and size:
lvcreate -L1G -s -n /dev/virtual-machines/ha1-snapshot /dev/virtual-machines/ha1

However, this does not seem to work with the following playbook; it errors out, suggesting it cannot look up the disks.

- hosts: all
  remote_user: root

#  become: yes
#  become_method: sudo
#  become_user: root

  vars:

  tasks:
    - name: create some test storage
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: fedora_alderaan
            # type: lvm
            state: present
            volumes:
              - name: test
                size: "1G"    
                # type: lvm
                # fs_type: xfs
                fs_label: "test"
                mount_point: '/mnt/test'
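
With the role as it stands, the existing VG apparently cannot be looked up without its backing disks, so the practical workaround is to list them explicitly. A sketch of that workaround (the disk name sda is a hypothetical placeholder; substitute the PV device(s) that actually back fedora_alderaan):

    - name: create some test storage
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: fedora_alderaan
            type: lvm
            disks: ['sda']   # hypothetical placeholder for the existing PV(s)
            state: present
            volumes:
              - name: test
                size: "1G"
                fs_label: "test"
                mount_point: '/mnt/test'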

storage: tests_lvm_errors.yml failed due to "Kernel module ext3 not available"

Hi
I found that tests_lvm_errors fails if the ext4 kernel module is not loaded (it is not loaded by default); this looks like an issue in the storage role.

The ext4 module gets loaded automatically once a mount operation is performed.
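
A quick way to check and pre-load the module before running the test (a sketch; on recent kernels ext3 filesystems are typically handled by the ext4 driver):

lsmod | grep ext4    # check whether the ext4 module is currently loaded
modprobe ext4        # load it manually if it is not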

playbook

tests/tests_lvm_errors.yml

$ansible-playbook -i inventory tests/tests_lvm_errors.yml -vvvv

TASK [Try to replace a pool by a file system on disk in safe mode] **************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_lvm_errors.yml:397

TASK [storage : Set version specific variables] *********************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:2
ok: [localhost] => (item=/root/test/storage/vars/RedHat-8.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-part"
        ]
    },
    "ansible_included_var_files": [
        "/root/test/storage/vars/RedHat-8.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "/root/test/storage/vars/RedHat-8.yml"
}

TASK [storage : define an empty list of pools to be used in testing] ************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:9
ok: [localhost] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [storage : define an empty list of volumes to be used in testing] **********************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:13
ok: [localhost] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [storage : include the appropriate provider tasks] *************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:17
included: /root/test/storage/tasks/main-blivet.yml for localhost

TASK [storage : get a list of rpm packages installed on host machine] ***********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:2
skipping: [localhost] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [storage : make sure blivet is available] **********************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:7
Running dnf
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592294818.0539384-4642-202795476816248 && echo ansible-tmp-1592294818.0539384-4642-202795476816248="` echo /root/.ansible/tmp/ansible-tmp-1592294818.0539384-4642-202795476816248 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-2092m5baly1_/tmpod75yu7a TO /root/.ansible/tmp/ansible-tmp-1592294818.0539384-4642-202795476816248/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592294818.0539384-4642-202795476816248/ /root/.ansible/tmp/ansible-tmp-1592294818.0539384-4642-202795476816248/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592294818.0539384-4642-202795476816248/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592294818.0539384-4642-202795476816248/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "python3-blivet",
                "libblockdev-dm",
                "libblockdev-lvm",
                "libblockdev-part"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : initialize internal facts] **************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:18
ok: [localhost] => {
    "ansible_facts": {
        "_storage_pools": [],
        "_storage_vol_defaults": [],
        "_storage_vol_pools": [],
        "_storage_vols_no_defaults": [],
        "_storage_vols_no_defaults_by_pool": {},
        "_storage_vols_w_defaults": [],
        "_storage_volumes": []
    },
    "changed": false
}

TASK [storage : Apply defaults to pools and volumes [1/6]] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:28

TASK [storage : Apply defaults to pools and volumes [2/6]] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:36

TASK [storage : Apply defaults to pools and volumes [3/6]] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:44

TASK [storage : Apply defaults to pools and volumes [4/6]] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:52

TASK [storage : Apply defaults to pools and volumes [5/6]] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:61

TASK [storage : Apply defaults to pools and volumes [6/6]] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:72
ok: [localhost] => (item={'name': 'test1', 'type': 'disk', 'fs_type': 'ext3', 'disks': ['sdk']}) => {
    "ansible_facts": {
        "_storage_volumes": [
            {
                "disks": [
                    "sdk"
                ],
                "fs_create_options": "",
                "fs_label": "",
                "fs_overwrite_existing": true,
                "fs_type": "ext3",
                "mount_check": 0,
                "mount_device_identifier": "uuid",
                "mount_options": "defaults",
                "mount_passno": 0,
                "mount_point": "",
                "name": "test1",
                "size": 0,
                "state": "present",
                "type": "disk"
            }
        ]
    },
    "ansible_loop_var": "volume",
    "changed": false,
    "volume": {
        "disks": [
            "sdk"
        ],
        "fs_type": "ext3",
        "name": "test1",
        "type": "disk"
    }
}

TASK [storage : debug] **********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": []
}

TASK [storage : debug] **********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": [
        {
            "disks": [
                "sdk"
            ],
            "fs_create_options": "",
            "fs_label": "",
            "fs_overwrite_existing": true,
            "fs_type": "ext3",
            "mount_check": 0,
            "mount_device_identifier": "uuid",
            "mount_options": "defaults",
            "mount_passno": 0,
            "mount_point": "",
            "name": "test1",
            "size": 0,
            "state": "present",
            "type": "disk"
        }
    ]
}

TASK [storage : get required packages] ******************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412 && echo ansible-tmp-1592294822.8819156-4676-31572033814412="` echo /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-2092m5baly1_/tmpqex7kqkz TO /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/ /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592294822.8819156-4676-31572033814412/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": true,
            "pools": [],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": [
                {
                    "disks": [
                        "sdk"
                    ],
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "ext3",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "",
                    "name": "test1",
                    "size": 0,
                    "state": "present",
                    "type": "disk"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "packages": [
        "e2fsprogs"
    ],
    "pools": [],
    "volumes": []
}

TASK [storage : make sure required packages are installed] **********************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
Running dnf
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291 && echo ansible-tmp-1592294826.6364615-4733-173430448428291="` echo /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-2092m5baly1_/tmpigd7fnjy TO /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/ /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592294826.6364615-4733-173430448428291/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "e2fsprogs"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : manage the pools and volumes to match the specified state] ******************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317 && echo ansible-tmp-1592294830.480519-4749-39146031242317="` echo /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-2092m5baly1_/tmpl59s_7ae TO /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/ /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592294830.480519-4749-39146031242317/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 820, in run_module
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 607, in manage_volume
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 280, in manage
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 250, in _reformat
  File "/tmp/ansible_blivet_payload_9tzqeryj/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 207, in _get_format
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": [
                {
                    "disks": [
                        "sdk"
                    ],
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "ext3",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "",
                    "name": "test1",
                    "size": 0,
                    "state": "present",
                    "type": "disk"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "required tools for file system 'ext3' are missing",
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [Check that we failed in the role] *****************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_lvm_errors.yml:413
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [Verify the output] ********************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_lvm_errors.yml:419
fatal: [localhost]: FAILED! => {
    "assertion": "blivet_output.failed and blivet_output.msg|regex_search('cannot remove existing formatting on volume.*in safe mode') and not blivet_output.changed",
    "changed": false,
    "evaluated_to": false,
    "msg": "Unexpected behavior w/ existing data on specified disks"
}

PLAY RECAP **********************************************************************************************************************************************************************************************************
localhost                  : ok=208  changed=1    unreachable=0    failed=1    skipped=50   rescued=11   ignored=0   

$ tail -30 /tmp/blivet.log

  VG space used = 1024 MiB
2020-06-16 04:07:13,901 INFO program/MainThread: Running [13] dmsetup info -co subsystem --noheadings testpool1-testvol1 ...
2020-06-16 04:07:13,907 INFO program/MainThread: stdout[13]: LVM

2020-06-16 04:07:13,907 INFO program/MainThread: stderr[13]: 
2020-06-16 04:07:13,907 INFO program/MainThread: ...done [13] (exit code: 0)
2020-06-16 04:07:13,913 DEBUG blivet/MainThread:                    DeviceTree.handle_format: name: testpool1-testvol1 ;
2020-06-16 04:07:13,913 DEBUG blivet/MainThread: no type or existing type for testpool1-testvol1, bailing
2020-06-16 04:07:13,913 INFO program/MainThread: Running... udevadm settle --timeout=300
2020-06-16 04:07:13,935 DEBUG program/MainThread: Return code: 0
2020-06-16 04:07:13,970 INFO blivet/MainThread: edd: MBR signature on sda is zero. new disk image?
2020-06-16 04:07:13,970 INFO blivet/MainThread: edd: collected mbr signatures: {'sdl': '0x000178c0'}
2020-06-16 04:07:13,978 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-root ; incomplete: False ; hidden: False ;
2020-06-16 04:07:13,982 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path returned existing 70 GiB lvmlv rhel_storageqe-62-root (56) with existing xfs filesystem
2020-06-16 04:07:13,982 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-root' to 'rhel_storageqe-62-root' (lvmlv)
2020-06-16 04:07:13,983 DEBUG blivet/MainThread: resolved 'UUID=0c459216-6a71-4860-8e5f-97bfc9c93095' to 'sda2' (partition)
2020-06-16 04:07:13,983 DEBUG blivet/MainThread: resolved 'UUID=3189-4B31' to 'sda1' (partition)
2020-06-16 04:07:13,986 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-home ; incomplete: False ; hidden: False ;
2020-06-16 04:07:13,990 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path returned existing 199.93 GiB lvmlv rhel_storageqe-62-home (43) with existing xfs filesystem
2020-06-16 04:07:13,991 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-home' to 'rhel_storageqe-62-home' (lvmlv)
2020-06-16 04:07:13,994 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path: path: /dev/mapper/rhel_storageqe--62-swap ; incomplete: False ; hidden: False ;
2020-06-16 04:07:13,997 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path returned existing 7.88 GiB lvmlv rhel_storageqe-62-swap (69) with existing swap
2020-06-16 04:07:13,998 DEBUG blivet/MainThread: resolved '/dev/mapper/rhel_storageqe--62-swap' to 'rhel_storageqe-62-swap' (lvmlv)
2020-06-16 04:07:14,001 DEBUG blivet/MainThread:                  DeviceTree.get_device_by_name: name: sdk ; incomplete: False ; hidden: False ;
2020-06-16 04:07:14,005 DEBUG blivet/MainThread:                  DeviceTree.get_device_by_name returned existing 279.4 GiB disk sdk (133) with existing lvmpv
2020-06-16 04:07:14,006 DEBUG blivet/MainThread: resolved 'sdk' to 'sdk' (disk)
2020-06-16 04:07:14,011 DEBUG blivet/MainThread:                   Ext3FS.supported: supported: True ;
2020-06-16 04:07:14,011 DEBUG blivet/MainThread: Kernel module ext3 not available
2020-06-16 04:07:14,011 DEBUG blivet/MainThread: get_format('ext3') returning Ext3FS instance with object id 225
2020-06-16 04:07:14,014 DEBUG blivet/MainThread:                Ext3FS.supported: supported: False ;

storage: tests_swap failed with one 280GB disk

The test fails when I use a 1TB disk; the error output reports "device is too large for new format", but it works when I run mkswap directly on the same disk.
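
For comparison, the manual command that succeeds on the same disk (a sketch; the device name is a placeholder for the disk under test):

mkswap /dev/sdX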

TASK [Create a disk device with swap] ********************************************************************************************************************************************************************************************************
task path: /root/test/storage/tests/tests_swap.yml:18

TASK [storage : Set version specific variables] **********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:2
ok: [localhost] => (item=/root/test/storage/vars/RedHat-8.yml) => {
    "ansible_facts": {
        "blivet_package_list": [
            "python3-blivet",
            "libblockdev-crypto",
            "libblockdev-dm",
            "libblockdev-lvm",
            "libblockdev-mdraid",
            "libblockdev-swap"
        ]
    },
    "ansible_included_var_files": [
        "/root/test/storage/vars/RedHat-8.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "/root/test/storage/vars/RedHat-8.yml"
}

TASK [storage : define an empty list of pools to be used in testing] *************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:9
ok: [localhost] => {
    "ansible_facts": {
        "_storage_pools_list": []
    },
    "changed": false
}

TASK [storage : define an empty list of volumes to be used in testing] ***********************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:13
ok: [localhost] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    },
    "changed": false
}

TASK [storage : include the appropriate provider tasks] **************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main.yml:17
included: /root/test/storage/tasks/main-blivet.yml for localhost

TASK [storage : get a list of rpm packages installed on host machine] ************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:2
skipping: [localhost] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

TASK [storage : make sure blivet is available] ***********************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:7
Running dnf
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592374364.8981004-202334-29846248641960 && echo ansible-tmp-1592374364.8981004-202334-29846248641960="` echo /root/.ansible/tmp/ansible-tmp-1592374364.8981004-202334-29846248641960 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-201969g7nl4la9/tmp5_e_9irq TO /root/.ansible/tmp/ansible-tmp-1592374364.8981004-202334-29846248641960/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592374364.8981004-202334-29846248641960/ /root/.ansible/tmp/ansible-tmp-1592374364.8981004-202334-29846248641960/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592374364.8981004-202334-29846248641960/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592374364.8981004-202334-29846248641960/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "python3-blivet",
                "libblockdev-crypto",
                "libblockdev-dm",
                "libblockdev-lvm",
                "libblockdev-mdraid",
                "libblockdev-swap"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : initialize internal facts] ***************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:18
ok: [localhost] => {
    "ansible_facts": {
        "_storage_pools": [],
        "_storage_vol_defaults": [],
        "_storage_vol_pools": [],
        "_storage_vols_no_defaults": [],
        "_storage_vols_no_defaults_by_pool": {},
        "_storage_vols_w_defaults": [],
        "_storage_volumes": []
    },
    "changed": false
}

TASK [storage : Apply defaults to pools and volumes [1/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:28

TASK [storage : Apply defaults to pools and volumes [2/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:36

TASK [storage : Apply defaults to pools and volumes [3/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:44

TASK [storage : Apply defaults to pools and volumes [4/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:52

TASK [storage : Apply defaults to pools and volumes [5/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:61

TASK [storage : Apply defaults to pools and volumes [6/6]] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:72
ok: [localhost] => (item={'name': 'test1', 'type': 'disk', 'disks': ['sdj'], 'fs_type': 'swap'}) => {
    "ansible_facts": {
        "_storage_volumes": [
            {
                "disks": [
                    "sdj"
                ],
                "encryption": false,
                "encryption_cipher": null,
                "encryption_key_file": null,
                "encryption_key_size": null,
                "encryption_luks_version": null,
                "encryption_passphrase": null,
                "fs_create_options": "",
                "fs_label": "",
                "fs_overwrite_existing": true,
                "fs_type": "swap",
                "mount_check": 0,
                "mount_device_identifier": "uuid",
                "mount_options": "defaults",
                "mount_passno": 0,
                "mount_point": "",
                "name": "test1",
                "raid_chunk_size": null,
                "raid_device_count": null,
                "raid_level": null,
                "raid_metadata_version": null,
                "raid_spare_count": null,
                "size": 0,
                "state": "present",
                "type": "disk"
            }
        ]
    },
    "ansible_loop_var": "volume",
    "changed": false,
    "volume": {
        "disks": [
            "sdj"
        ],
        "fs_type": "swap",
        "name": "test1",
        "type": "disk"
    }
}

TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": []
}

TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": [
        {
            "disks": [
                "sdj"
            ],
            "encryption": false,
            "encryption_cipher": null,
            "encryption_key_file": null,
            "encryption_key_size": null,
            "encryption_luks_version": null,
            "encryption_passphrase": null,
            "fs_create_options": "",
            "fs_label": "",
            "fs_overwrite_existing": true,
            "fs_type": "swap",
            "mount_check": 0,
            "mount_device_identifier": "uuid",
            "mount_options": "defaults",
            "mount_passno": 0,
            "mount_point": "",
            "name": "test1",
            "raid_chunk_size": null,
            "raid_device_count": null,
            "raid_level": null,
            "raid_metadata_version": null,
            "raid_spare_count": null,
            "size": 0,
            "state": "present",
            "type": "disk"
        }
    ]
}

TASK [storage : get required packages] *******************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592374369.7118692-202368-111377782914919 && echo ansible-tmp-1592374369.7118692-202368-111377782914919="` echo /root/.ansible/tmp/ansible-tmp-1592374369.7118692-202368-111377782914919 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-201969g7nl4la9/tmp4ot4o6pr TO /root/.ansible/tmp/ansible-tmp-1592374369.7118692-202368-111377782914919/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592374369.7118692-202368-111377782914919/ /root/.ansible/tmp/ansible-tmp-1592374369.7118692-202368-111377782914919/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592374369.7118692-202368-111377782914919/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592374369.7118692-202368-111377782914919/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": true,
            "pools": [],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": [
                {
                    "disks": [
                        "sdj"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "swap",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "",
                    "name": "test1",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "size": 0,
                    "state": "present",
                    "type": "disk"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "packages": [],
    "pools": [],
    "volumes": []
}

TASK [storage : make sure required packages are installed] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
Running dnf
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592374373.259246-202437-45788962828849 && echo ansible-tmp-1592374373.259246-202437-45788962828849="` echo /root/.ansible/tmp/ansible-tmp-1592374373.259246-202437-45788962828849 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-201969g7nl4la9/tmpa8fhw0ws TO /root/.ansible/tmp/ansible-tmp-1592374373.259246-202437-45788962828849/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592374373.259246-202437-45788962828849/ /root/.ansible/tmp/ansible-tmp-1592374373.259246-202437-45788962828849/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592374373.259246-202437-45788962828849/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592374373.259246-202437-45788962828849/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : manage the pools and volumes to match the specified state] *******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761 && echo ansible-tmp-1592374376.9155169-202453-71324387066761="` echo /root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-201969g7nl4la9/tmpb81wdwno TO /root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/ /root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py", line 102, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.blivet', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1190, in <module>
  File "/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1187, in main
  File "/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1150, in run_module
  File "/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 868, in manage_volume
  File "/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 353, in manage
  File "/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 385, in _create
  File "/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 340, in _reformat
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/blivet.py", line 828, in format_device
    raise e
  File "/usr/lib/python3.6/site-packages/blivet/blivet.py", line 824, in format_device
    self.devicetree.actions.add(create_ac)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/actionlist.py", line 76, in add
    action.apply()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/deviceaction.py", line 607, in apply
    self.device.format = self._format
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/devices/storage.py", line 781, in <lambda>
    lambda d, f: d._set_format(f),
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/devices/storage.py", line 732, in _set_format
    raise errors.DeviceError("device is too large for new format")
blivet.errors.DeviceError: device is too large for new format
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1592374376.9155169-202453-71324387066761/AnsiballZ_blivet.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.blivet', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1190, in <module>\n  File \"/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1187, in main\n  File \"/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1150, in run_module\n  File \"/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 868, in manage_volume\n  File \"/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 353, in manage\n  File \"/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 385, in _create\n  File \"/tmp/ansible_blivet_payload_5h2_lfy7/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 340, in _reformat\n  File \"/usr/lib/python3.6/site-packages/blivet/threads.py\", line 53, in run_with_lock\n    return m(*args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/blivet.py\", line 828, in format_device\n    raise e\n  File \"/usr/lib/python3.6/site-packages/blivet/blivet.py\", line 824, in format_device\n    self.devicetree.actions.add(create_ac)\n  File \"/usr/lib/python3.6/site-packages/blivet/threads.py\", line 53, in run_with_lock\n    return m(*args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/actionlist.py\", line 76, in add\n    action.apply()\n  File \"/usr/lib/python3.6/site-packages/blivet/threads.py\", line 53, in run_with_lock\n    return m(*args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/deviceaction.py\", line 607, in apply\n    self.device.format = self._format\n  File \"/usr/lib/python3.6/site-packages/blivet/threads.py\", line 53, in run_with_lock\n    return m(*args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/devices/storage.py\", line 781, in <lambda>\n    lambda d, f: d._set_format(f),\n  File \"/usr/lib/python3.6/site-packages/blivet/threads.py\", line 53, in run_with_lock\n    return m(*args, **kwargs)\n  File \"/usr/lib/python3.6/site-packages/blivet/devices/storage.py\", line 732, in _set_format\n    raise errors.DeviceError(\"device is too large for new format\")\nblivet.errors.DeviceError: device is too large for new format\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost                  : ok=31   changed=0    unreachable=0    failed=1    skipped=19   rescued=0    ignored=0   
[root@storageqe-62 storage]# lsblk | grep sdj
sdj                           8:144  0 931.5G  0 disk 
[root@storageqe-62 storage]# mkswap /dev/sdj 
Setting up swapspace version 1, size = 931.5 GiB (1000204881920 bytes)
no label, UUID=85eb5383-f60f-4e9a-8644-79cd1ea51c18
[root@storageqe-62 storage]# echo $?
0

storage: ntfs not supported

environment

# uname -r
5.6.6-300.fc32.x86_64
# cat /etc/redhat-release
Fedora release 32 (Thirty Two)

playbook

# cat tests/ntfs_disk.yml 
- hosts: all
  vars:
    mount_location: '/opt/test1'
    volume_group_size: '10g'
    volume_size_before: '5g'
    volume_size_after: '9g'

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create one ntfs fs disk 
      include_role:
        name: storage
      vars:
          storage_volumes:
            - name: foo
              disks:
                - sdf
              type: disk 
              fs_type: 'ntfs'
              mount_point: "{{ mount_location }}"

execution log

---snip---
TASK [storage : manage the pools and volumes to match the specified state] ****************************************************************************************************************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1597658552.6994646-28894-122065178217497 && echo ansible-tmp-1597658552.6994646-28894-122065178217497="` echo /root/.ansible/tmp/ansible-tmp-1597658552.6994646-28894-122065178217497 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-28512dx15fz8s/tmp26hbxlib TO /root/.ansible/tmp/ansible-tmp-1597658552.6994646-28894-122065178217497/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1597658552.6994646-28894-122065178217497/ /root/.ansible/tmp/ansible-tmp-1597658552.6994646-28894-122065178217497/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1597658552.6994646-28894-122065178217497/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1597658552.6994646-28894-122065178217497/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_y36wzto6/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1181, in run_module
  File "/tmp/ansible_blivet_payload_y36wzto6/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 898, in manage_volume
  File "/tmp/ansible_blivet_payload_y36wzto6/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 415, in manage
  File "/tmp/ansible_blivet_payload_y36wzto6/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 447, in _create
  File "/tmp/ansible_blivet_payload_y36wzto6/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 393, in _reformat
  File "/tmp/ansible_blivet_payload_y36wzto6/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 338, in _get_format
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": [
                {
                    "disks": [
                        "sdf"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_password": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "ntfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "foo",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "size": 0,
                    "state": "present",
                    "type": "disk"
                }
            ]
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "required tools for file system 'ntfs' are missing",
    "packages": [],
    "pools": [],
    "volumes": []
}

PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
localhost                  : ok=31   changed=0    unreachable=0    failed=1    skipped=19   rescued=0    ignored=0   

cat /tmp/blivet.log | tail -20

2020-08-17 06:02:35,028 INFO program/MainThread: Running... udevadm settle --timeout=300
2020-08-17 06:02:35,051 DEBUG program/MainThread: Return code: 0
2020-08-17 06:02:35,078 INFO blivet/MainThread: edd: MBR signature on sda is zero. new disk image?
2020-08-17 06:02:35,078 INFO blivet/MainThread: edd: MBR signature on sdl is zero. new disk image?
2020-08-17 06:02:35,078 INFO blivet/MainThread: edd: collected mbr signatures: {}
2020-08-17 06:02:35,083 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path: path: /dev/mapper/fedora_storageqe--62-root ; incomplete: False ; hidden: False ;
2020-08-17 06:02:35,086 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path returned existing 15 GiB lvmlv fedora_storageqe-62-root (43) with existing xfs filesystem
2020-08-17 06:02:35,086 DEBUG blivet/MainThread: resolved '/dev/mapper/fedora_storageqe--62-root' to 'fedora_storageqe-62-root' (lvmlv)
2020-08-17 06:02:35,087 DEBUG blivet/MainThread: resolved 'UUID=439fbba7-c553-44d9-ac42-3cad0a46290e' to 'sda2' (partition)
2020-08-17 06:02:35,087 DEBUG blivet/MainThread: resolved 'UUID=1328-582B' to 'sda1' (partition)
2020-08-17 06:02:35,089 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path: path: /dev/mapper/fedora_storageqe--62-swap ; incomplete: False ; hidden: False ;
2020-08-17 06:02:35,091 DEBUG blivet/MainThread:                DeviceTree.get_device_by_path returned existing 7.88 GiB lvmlv fedora_storageqe-62-swap (56) with existing swap
2020-08-17 06:02:35,092 DEBUG blivet/MainThread: resolved '/dev/mapper/fedora_storageqe--62-swap' to 'fedora_storageqe-62-swap' (lvmlv)
2020-08-17 06:02:35,092 DEBUG blivet/MainThread: failed to resolve 'UUID=ab31a5b5-d7fb-4c63-8946-eaefd30e1f52'
2020-08-17 06:02:35,094 DEBUG blivet/MainThread:                  DeviceTree.get_device_by_name: name: sdf ; incomplete: False ; hidden: False ;
2020-08-17 06:02:35,097 DEBUG blivet/MainThread:                  DeviceTree.get_device_by_name returned existing 931.51 GiB disk sdf (165)
2020-08-17 06:02:35,097 DEBUG blivet/MainThread: resolved 'sdf' to 'sdf' (disk)
2020-08-17 06:02:35,100 DEBUG blivet/MainThread:                    NTFS.supported: supported: False ;
2020-08-17 06:02:35,100 DEBUG blivet/MainThread: get_format('ntfs') returning NTFS instance with object id 185
2020-08-17 06:02:35,102 DEBUG blivet/MainThread:                 NTFS.supported: supported: False ;
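
The blivet log above shows NTFS.supported: False, i.e. the userspace tools for creating ntfs are not present and the role does not install them. A minimal sketch of a pre-task that would provide them, assuming the tools come from the ntfsprogs package on Fedora (an assumption on my part, not something the role documents):

- hosts: all
  become: true
  tasks:
    - name: Install ntfs userspace tools (assumed package name: ntfsprogs)
      package:
        name: ntfsprogs
        state: present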

storage: calltrace observed when setting type: partition for storage_pools

From the description of pull/64

I designed a test case for a storage_pool with partition type; see my environment, playbook, and execution log below.

[1]
Fix key for partition pool...
This leads to a failure (crash?) any time a pool of type 'partition' is present.

environment: RHEL-8.2

playbook

---
- hosts: all
  become: true
  vars:
    mount_location: '/opt/test1'
    volume_group_size: '5g'
    volume_size: '4g'

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 3

    - name: Test for Fix key for partition pool 
      block:
        - name: Fix key for partition pool
          include_role:
            name: storage
          vars:
            storage_safe_mode: false
            storage_pools:
              - name: vg
                disks: "{{ unused_disks }}"
                type: partition
                volumes:
                  - name: lv 
                    size: "{{ volume_size }}"
                    mount_point: "{{ mount_location }}"

ansible-playbook -i inventory tests/a.yml -vvvv

---snip---
TASK [storage : debug] ***************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": [
        {
            "disks": [
                "sdb",
                "sde",
                "sdf"
            ],
            "encryption": false,
            "encryption_cipher": null,
            "encryption_key_file": null,
            "encryption_key_size": null,
            "encryption_luks_version": null,
            "encryption_passphrase": null,
            "name": "vg",
            "raid_level": null,
            "state": "present",
            "type": "partition",
            "volumes": [
                {
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "lv",
                    "pool": "vg",
                    "raid_chunk_size": null,
                    "raid_device_count": null,
                    "raid_level": null,
                    "raid_metadata_version": null,
                    "raid_spare_count": null,
                    "size": "4g",
                    "state": "present",
                    "type": "lvm"
                }
            ]
        }
    ]
}

TASK [storage : debug] ***************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": []
}

TASK [storage : get required packages] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1593503378.2319095-10404-78465090616019 && echo ansible-tmp-1593503378.2319095-10404-78465090616019="` echo /root/.ansible/tmp/ansible-tmp-1593503378.2319095-10404-78465090616019 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-9913h7i6w68k/tmpm1zzb2ru TO /root/.ansible/tmp/ansible-tmp-1593503378.2319095-10404-78465090616019/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1593503378.2319095-10404-78465090616019/ /root/.ansible/tmp/ansible-tmp-1593503378.2319095-10404-78465090616019/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1593503378.2319095-10404-78465090616019/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1593503378.2319095-10404-78465090616019/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [],
    "changed": false,
    "crypts": [],
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": true,
            "pools": [
                {
                    "disks": [
                        "sdb",
                        "sde",
                        "sdf"
                    ],
                    "encryption": false,
                    "encryption_cipher": null,
                    "encryption_key_file": null,
                    "encryption_key_size": null,
                    "encryption_luks_version": null,
                    "encryption_passphrase": null,
                    "name": "vg",
                    "raid_level": null,
                    "state": "present",
                    "type": "partition",
                    "volumes": [
                        {
                            "encryption": false,
                            "encryption_cipher": null,
                            "encryption_key_file": null,
                            "encryption_key_size": null,
                            "encryption_luks_version": null,
                            "encryption_passphrase": null,
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "lv",
                            "pool": "vg",
                            "raid_chunk_size": null,
                            "raid_device_count": null,
                            "raid_level": null,
                            "raid_metadata_version": null,
                            "raid_spare_count": null,
                            "size": "4g",
                            "state": "present",
                            "type": "lvm"
                        }
                    ]
                }
            ],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": []
        }
    },
    "leaves": [],
    "mounts": [],
    "packages": [
        "lvm2",
        "xfsprogs"
    ],
    "pools": [],
    "volumes": []
}

TASK [storage : make sure required packages are installed] ***************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
Running dnf
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1593503384.8154094-10536-209921407276320 && echo ansible-tmp-1593503384.8154094-10536-209921407276320="` echo /root/.ansible/tmp/ansible-tmp-1593503384.8154094-10536-209921407276320 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-9913h7i6w68k/tmpd6re1p33 TO /root/.ansible/tmp/ansible-tmp-1593503384.8154094-10536-209921407276320/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1593503384.8154094-10536-209921407276320/ /root/.ansible/tmp/ansible-tmp-1593503384.8154094-10536-209921407276320/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1593503384.8154094-10536-209921407276320/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1593503384.8154094-10536-209921407276320/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "lvm2",
                "xfsprogs"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : manage the pools and volumes to match the specified state] ***********************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857 && echo ansible-tmp-1593503389.0837007-10552-256949382030857="` echo /root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-9913h7i6w68k/tmp20hb2ct9 TO /root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/ /root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py", line 102, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.blivet', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1197, in <module>
  File "/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1194, in main
  File "/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 1147, in run_module
  File "/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 884, in manage_pool
  File "/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 790, in manage
  File "/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 772, in _manage_volumes
  File "/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 357, in manage
  File "/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 461, in _create
AttributeError: 'DiskDevice' object has no attribute 'free_space'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1593503389.0837007-10552-256949382030857/AnsiballZ_blivet.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.blivet', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1197, in <module>\n  File \"/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1194, in main\n  File \"/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 1147, in run_module\n  File \"/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 884, in manage_pool\n  File \"/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 790, in manage\n  File \"/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 772, in _manage_volumes\n  File \"/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 357, in manage\n  File \"/tmp/ansible_blivet_payload_pv3jltzf/ansible_blivet_payload.zip/ansible/modules/blivet.py\", line 461, in _create\nAttributeError: 'DiskDevice' object has no attribute 'free_space'\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP ***************************************************************************************************************************************************************************************************************
localhost                  : ok=35   changed=0    unreachable=0    failed=1    skipped=15   rescued=0    ignored=0  

storage: resize function for vfat FS does not work

Hi,
I found that resizing a vfat filesystem does not work on RHEL 7. The disk is 20G and the VG is 20G. The volume size does not change when resizing from 19g to 25g, yet the terminal output reports success and does not raise an error that the requested size exceeds the size of the disk.

playbook

---
- hosts: all
  become: true
  vars:
    mount_location: '/opt/test1'
    volume_group_size: '20g'
    volume_size_before: '19g'
    volume_size_after: '25g'
    storage_safe_mode: false

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 1

    - name: Create one LVM logical volume with "{{ volume_size_before }}" under one volume group
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              type: lvm
              volumes:
                - name: test1
                  fs_type: 'vfat'
                  size: "{{ volume_size_before }}"
                  mount_point: "{{ mount_location }}"

    - shell: lsblk | grep foo-test1

    - shell: mount | grep foo-test1

    - include_tasks: verify-role-results.yml

    - name: Change volume_size  "{{ volume_size_after }}"
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              type: lvm
              disks: "{{ unused_disks }}"
              volumes:
                - name: test1
                  fs_type: 'vfat'
                  size: "{{ volume_size_after }}"
                  mount_point: "{{ mount_location }}"

    - shell: lsblk | grep foo-test1

    - shell: mount | grep foo-test1

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: storage
      vars:
          storage_pools:
            - name: foo
              disks: "{{ unused_disks }}"
              state: absent
              volumes:
                - name: test1
                  size: "{{ volume_size_after }}"
                  mount_point: "{{ mount_location }}"

    - include_tasks: verify-role-results.yml

output log

task path: /root/ansible-test/upstream/storage/tasks/main-blivet.yml:113
ok: [192.168.122.96] => {
    "blivet_output": {
        "actions": [],
        "changed": false,
        "failed": false,
        "leaves": [
            "/dev/vda1",
            "/dev/mapper/rhel_ansible--el7-swap",
            "/dev/mapper/rhel_ansible--el7-root",
            "/dev/mapper/foo-test1",
            "/dev/vdc",
            "/dev/sr0"
        ],
        "mounts": [
            {
                "dump": 0,
                "fstype": "vfat",
                "opts": "defaults",
                                                     "passno": 0,
                "path": "/opt/test1",
                "src": "/dev/mapper/foo-test1",
                "state": "mounted"
            }
        ],
        "packages": [
            "dosfstools",
            "xfsprogs",
            "e2fsprogs",
            "lvm2"
        ],
        "pools": [''' 
            {
                "disks": [
                    "vdb"
                ],
                "name": "foo",
                "state": "present",
                "type": "lvm",
                "volumes": [
                    {
                        "_device": "/dev/mapper/foo-test1",
                        "_mount_id": "/dev/mapper/foo-test1",
                        "fs_create_options": "",
                        "fs_label": "",
                        "fs_overwrite_existing": true,
                        "fs_type": "vfat",
                        "mount_check": 0,
                        "mount_device_identifier": "uuid",
                        "mount_options": "defaults",
                        "mount_passno": 0,
                        "mount_point": "/opt/test1",
                        "name": "test1",
                        "pool": "foo",
                        "size": "25g",
                        "state": "present",
                        "type": "lvm"
                    }
                ]
            }
        ],
        "volumes": []
    }
}


PLAY RECAP **************************************************************************************
192.168.122.96             : ok=173  changed=8    unreachable=0    failed=0    skipped=29   rescued=0    ignored=0
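
One way to surface the silent pass is to compare the real device size with the requested size after the resize step. A minimal sketch of tasks that could be appended to the playbook above, assuming the LV is still exposed as /dev/mapper/foo-test1 and 25g is the requested size:

    - name: Read the actual size of the volume in bytes
      command: lsblk -bno SIZE /dev/mapper/foo-test1
      register: test1_size
      changed_when: false

    - name: Fail if the volume did not grow to the requested 25g
      assert:
        that:
          - (test1_size.stdout | trim | int) >= 25 * 1024 * 1024 * 1024
        fail_msg: "resize to {{ volume_size_after }} did not take effect"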

Make lvm size optional

Make the lvm size option optional if only a single LV is specified, and assume 100% of the free space.
If multiple LVs are specified, a size should be required.
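
A sketch of the kind of playbook this would allow, with size omitted for the single volume (hypothetical behaviour, shown only to illustrate the request; unused_disks comes from the same get_unused_disk.yml helper used in the other tests):

- hosts: all
  tasks:
    - name: Create a single LV that takes all remaining space (proposed behaviour)
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo
            disks: "{{ unused_disks }}"
            type: lvm
            volumes:
              - name: test1          # no size given; would default to 100% of the free space
                mount_point: /opt/test1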

storage: tests_deps.yml failed on latest RHEL7

[root@storageqe-36 storage]# ansible-playbook -i localhost tests//tests_deps.yml -vvvv
ansible-playbook 2.9.10

PLAYBOOK: tests_deps.yml ***********************************************************************************************************************************************************************
Positional arguments: tests//tests_deps.yml
become_method: sudo
inventory: (u'/root/storage/localhost',)
forks: 5
tags: (u'all',)
verbosity: 4
connection: smart
timeout: 10
1 plays in tests//tests_deps.yml

PLAY [all localhost] ***************************************************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************************************
task path: /root/storage/tests/tests_deps.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1594659500.54-6087-74345776708991 && echo ansible-tmp-1594659500.54-6087-74345776708991="` echo /root/.ansible/tmp/ansible-tmp-1594659500.54-6087-74345776708991 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-60788bGWHN/tmpztnz3F TO /root/.ansible/tmp/ansible-tmp-1594659500.54-6087-74345776708991/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1594659500.54-6087-74345776708991/ /root/.ansible/tmp/ansible-tmp-1594659500.54-6087-74345776708991/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1594659500.54-6087-74345776708991/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1594659500.54-6087-74345776708991/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers

TASK [include_role : storage] ******************************************************************************************************************************************************************
task path: /root/storage/tests/tests_deps.yml:6

TASK [storage : Set version specific variables] ************************************************************************************************************************************************
task path: /root/storage/tasks/main.yml:2
ok: [localhost] => {
    "ansible_facts": {
        "blivet_package_list": [
            "python-enum34", 
            "python-blivet3", 
            "libblockdev-crypto", 
            "libblockdev-dm", 
            "libblockdev-lvm", 
            "libblockdev-mdraid", 
            "libblockdev-swap"
        ]
    }, 
    "ansible_included_var_files": [
        "/root/storage/vars/RedHat_7.yml"
    ], 
    "changed": false
}

TASK [storage : define an empty list of pools to be used in testing] ***************************************************************************************************************************
task path: /root/storage/tasks/main.yml:14
ok: [localhost] => {
    "ansible_facts": {
        "_storage_pools_list": []
    }, 
    "changed": false
}

TASK [storage : define an empty list of volumes to be used in testing] *************************************************************************************************************************
task path: /root/storage/tasks/main.yml:18
ok: [localhost] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    }, 
    "changed": false
}

TASK [storage : include the appropriate provider tasks] ****************************************************************************************************************************************
task path: /root/storage/tasks/main.yml:22
included: /root/storage/tasks/main-blivet.yml for localhost

TASK [storage : get a list of rpm packages installed on host machine] **************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:2
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False"
}

TASK [storage : make sure blivet is available] *************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:7
Running yum
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1594659501.73-6252-41623003377280 && echo ansible-tmp-1594659501.73-6252-41623003377280="` echo /root/.ansible/tmp/ansible-tmp-1594659501.73-6252-41623003377280 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-60788bGWHN/tmpsB87Lb TO /root/.ansible/tmp/ansible-tmp-1594659501.73-6252-41623003377280/AnsiballZ_yum.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1594659501.73-6252-41623003377280/ /root/.ansible/tmp/ansible-tmp-1594659501.73-6252-41623003377280/AnsiballZ_yum.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1594659501.73-6252-41623003377280/AnsiballZ_yum.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1594659501.73-6252-41623003377280/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false, 
    "invocation": {
        "module_args": {
            "allow_downgrade": false, 
            "autoremove": false, 
            "bugfix": false, 
            "conf_file": null, 
            "disable_excludes": null, 
            "disable_gpg_check": false, 
            "disable_plugin": [], 
            "disablerepo": [], 
            "download_dir": null, 
            "download_only": false, 
            "enable_plugin": [], 
            "enablerepo": [], 
            "exclude": [], 
            "install_repoquery": true, 
            "install_weak_deps": true, 
            "installroot": "/", 
            "list": null, 
            "lock_timeout": 30, 
            "name": [
                "python-enum34", 
                "python-blivet3", 
                "libblockdev-crypto", 
                "libblockdev-dm", 
                "libblockdev-lvm", 
                "libblockdev-mdraid", 
                "libblockdev-swap"
            ], 
            "releasever": null, 
            "security": false, 
            "skip_broken": false, 
            "state": "present", 
            "update_cache": false, 
            "update_only": false, 
            "use_backend": "auto", 
            "validate_certs": true
        }
    }, 
    "msg": "", 
    "rc": 0, 
    "results": [
        "python-enum34-1.0.4-1.el7.noarch providing python-enum34 is already installed", 
        "1:python2-blivet3-3.1.3-3.el7.noarch providing python-blivet3 is already installed", 
        "libblockdev-crypto-2.18-5.el7.x86_64 providing libblockdev-crypto is already installed", 
        "libblockdev-dm-2.18-5.el7.x86_64 providing libblockdev-dm is already installed", 
        "libblockdev-lvm-2.18-5.el7.x86_64 providing libblockdev-lvm is already installed", 
        "libblockdev-mdraid-2.18-5.el7.x86_64 providing libblockdev-mdraid is already installed", 
        "libblockdev-swap-2.18-5.el7.x86_64 providing libblockdev-swap is already installed"
    ]
}

TASK [storage : initialize internal facts] *****************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:18
ok: [localhost] => {
    "ansible_facts": {
        "_storage_pools": [], 
        "_storage_vol_defaults": [], 
        "_storage_vol_pools": [], 
        "_storage_vols_no_defaults": [], 
        "_storage_vols_no_defaults_by_pool": {}, 
        "_storage_vols_w_defaults": [], 
        "_storage_volumes": []
    }, 
    "changed": false
}

TASK [storage : Apply defaults to pools and volumes [1/6]] *************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:28

TASK [storage : Apply defaults to pools and volumes [2/6]] *************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:36

TASK [storage : Apply defaults to pools and volumes [3/6]] *************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:44

TASK [storage : Apply defaults to pools and volumes [4/6]] *************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:52

TASK [storage : Apply defaults to pools and volumes [5/6]] *************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:61

TASK [storage : Apply defaults to pools and volumes [6/6]] *************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:72

TASK [storage : debug] *************************************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": []
}

TASK [storage : debug] *************************************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": []
}

TASK [storage : get required packages] *********************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:90
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1594659502.79-6301-123524061725182 && echo ansible-tmp-1594659502.79-6301-123524061725182="` echo /root/.ansible/tmp/ansible-tmp-1594659502.79-6301-123524061725182 `" ) && sleep 0'
Using module file /root/storage/library/blivet.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-60788bGWHN/tmpxpoH0Z TO /root/.ansible/tmp/ansible-tmp-1594659502.79-6301-123524061725182/AnsiballZ_blivet.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1594659502.79-6301-123524061725182/ /root/.ansible/tmp/ansible-tmp-1594659502.79-6301-123524061725182/AnsiballZ_blivet.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1594659502.79-6301-123524061725182/AnsiballZ_blivet.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1594659502.79-6301-123524061725182/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [], 
    "changed": false, 
    "crypts": [], 
    "invocation": {
        "module_args": {
            "disklabel_type": null, 
            "packages_only": true, 
            "pools": [], 
            "safe_mode": true, 
            "use_partitions": null, 
            "volumes": []
        }
    }, 
    "leaves": [], 
    "mounts": [], 
    "packages": [], 
    "pools": [], 
    "volumes": []
}

TASK [storage : make sure required packages are installed] *************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:99
Running yum
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1594659503.16-6327-159188571252203 && echo ansible-tmp-1594659503.16-6327-159188571252203="` echo /root/.ansible/tmp/ansible-tmp-1594659503.16-6327-159188571252203 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-60788bGWHN/tmp8L77RA TO /root/.ansible/tmp/ansible-tmp-1594659503.16-6327-159188571252203/AnsiballZ_yum.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1594659503.16-6327-159188571252203/ /root/.ansible/tmp/ansible-tmp-1594659503.16-6327-159188571252203/AnsiballZ_yum.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1594659503.16-6327-159188571252203/AnsiballZ_yum.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1594659503.16-6327-159188571252203/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false, 
    "invocation": {
        "module_args": {
            "allow_downgrade": false, 
            "autoremove": false, 
            "bugfix": false, 
            "conf_file": null, 
            "disable_excludes": null, 
            "disable_gpg_check": false, 
            "disable_plugin": [], 
            "disablerepo": [], 
            "download_dir": null, 
            "download_only": false, 
            "enable_plugin": [], 
            "enablerepo": [], 
            "exclude": [], 
            "install_repoquery": true, 
            "install_weak_deps": true, 
            "installroot": "/", 
            "list": null, 
            "lock_timeout": 30, 
            "name": [], 
            "releasever": null, 
            "security": false, 
            "skip_broken": false, 
            "state": "present", 
            "update_cache": false, 
            "update_only": false, 
            "use_backend": "auto", 
            "validate_certs": true
        }
    }, 
    "msg": "", 
    "rc": 0, 
    "results": []
}

TASK [storage : manage the pools and volumes to match the specified state] *********************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:104
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1594659503.57-6357-176730183967573 && echo ansible-tmp-1594659503.57-6357-176730183967573="` echo /root/.ansible/tmp/ansible-tmp-1594659503.57-6357-176730183967573 `" ) && sleep 0'
Using module file /root/storage/library/blivet.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-60788bGWHN/tmpfhYXke TO /root/.ansible/tmp/ansible-tmp-1594659503.57-6357-176730183967573/AnsiballZ_blivet.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1594659503.57-6357-176730183967573/ /root/.ansible/tmp/ansible-tmp-1594659503.57-6357-176730183967573/AnsiballZ_blivet.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1594659503.57-6357-176730183967573/AnsiballZ_blivet.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1594659503.57-6357-176730183967573/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [], 
    "changed": false, 
    "crypts": [], 
    "invocation": {
        "module_args": {
            "disklabel_type": null, 
            "packages_only": false, 
            "pools": [], 
            "safe_mode": true, 
            "use_partitions": null, 
            "volumes": []
        }
    }, 
    "leaves": [], 
    "mounts": [], 
    "packages": [], 
    "pools": [], 
    "volumes": []
}

TASK [storage : debug] *************************************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:113
ok: [localhost] => {
    "blivet_output": {
        "actions": [], 
        "changed": false, 
        "crypts": [], 
        "failed": false, 
        "leaves": [], 
        "mounts": [], 
        "packages": [], 
        "pools": [], 
        "volumes": []
    }
}

TASK [storage : set the list of pools for test verification] ***********************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:116
ok: [localhost] => {
    "ansible_facts": {
        "_storage_pools_list": []
    }, 
    "changed": false
}

TASK [storage : set the list of volumes for test verification] *********************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:120
ok: [localhost] => {
    "ansible_facts": {
        "_storage_volumes_list": []
    }, 
    "changed": false
}

TASK [storage : remove obsolete mounts] ********************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:136

TASK [storage : tell systemd to refresh its view of /etc/fstab] ********************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:147
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False"
}

TASK [storage : set up new/current mounts] *****************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:152

TASK [storage : tell systemd to refresh its view of /etc/fstab] ********************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:163
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False"
}

TASK [storage : Manage /etc/crypttab to account for changes we just made] **********************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:171

TASK [storage : Update facts] ******************************************************************************************************************************************************************
task path: /root/storage/tasks/main-blivet.yml:186
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1594659504.12-6399-102550370915255 && echo ansible-tmp-1594659504.12-6399-102550370915255="` echo /root/.ansible/tmp/ansible-tmp-1594659504.12-6399-102550370915255 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-60788bGWHN/tmp3EBuyq TO /root/.ansible/tmp/ansible-tmp-1594659504.12-6399-102550370915255/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1594659504.12-6399-102550370915255/ /root/.ansible/tmp/ansible-tmp-1594659504.12-6399-102550370915255/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1594659504.12-6399-102550370915255/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1594659504.12-6399-102550370915255/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]

TASK [test lvm and xfs package deps] ***********************************************************************************************************************************************************
task path: /root/storage/tests/tests_deps.yml:9
included: /root/storage/tests/run_blivet.yml for localhost

TASK [test lvm and xfs package deps] ***********************************************************************************************************************************************************
task path: /root/storage/tests/run_blivet.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1594659504.71-6554-80081735670704 && echo ansible-tmp-1594659504.71-6554-80081735670704="` echo /root/.ansible/tmp/ansible-tmp-1594659504.71-6554-80081735670704 `" ) && sleep 0'
Using module file /root/storage/library/blivet.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-60788bGWHN/tmpkYuvAh TO /root/.ansible/tmp/ansible-tmp-1594659504.71-6554-80081735670704/AnsiballZ_blivet.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1594659504.71-6554-80081735670704/ /root/.ansible/tmp/ansible-tmp-1594659504.71-6554-80081735670704/AnsiballZ_blivet.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1594659504.71-6554-80081735670704/AnsiballZ_blivet.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1594659504.71-6554-80081735670704/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [], 
    "changed": false, 
    "crypts": [], 
    "invocation": {
        "module_args": {
            "disklabel_type": null, 
            "packages_only": true, 
            "pools": [
                {
                    "disks": [], 
                    "name": "foo", 
                    "state": "present", 
                    "type": "lvm", 
                    "volumes": [
                        {
                            "fs_type": "xfs", 
                            "mountpoint": "/foo", 
                            "name": "test1", 
                            "state": "present", 
                            "type": "lvm"
                        }
                    ]
                }
            ], 
            "safe_mode": true, 
            "use_partitions": true, 
            "volumes": []
        }
    }, 
    "leaves": [], 
    "mounts": [], 
    "packages": [
        "lvm2", 
        "xfsprogs"
    ], 
    "pools": [], 
    "volumes": []
}

TASK [assert] **********************************************************************************************************************************************************************************
task path: /root/storage/tests/tests_deps.yml:25
fatal: [localhost]: FAILED! => {
    "msg": "The conditional check '[u'lvm2', u'xfsprogs'] == ['lvm2', 'xfsprogs']' failed. The error was: template error while templating string: expected token ',', got 'string'. String: {% if [u'lvm2', u'xfsprogs'] == ['lvm2', 'xfsprogs'] %} True {% else %} False {% endif %}"
}

PLAY RECAP *************************************************************************************************************************************************************************************
localhost                  : ok=18   changed=0    unreachable=0    failed=1    skipped=12   rescued=0    ignored=0  
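
Note that the assert above is not reporting a real package mismatch: the expected packages (lvm2 and xfsprogs) were returned, but the conditional appears to have the registered result templated into it as its Python 2 repr, so the u'' string prefixes end up inside the Jinja2 expression and the template fails to parse. A more robust check would compare the registered variable directly rather than its string form. A minimal sketch, assuming the blivet run above was registered as package_deps (a hypothetical name, not the shipped test code):

- name: Verify the reported package dependencies   # sketch only, not the shipped test
  assert:
    that:
      - package_deps.packages | sort == ['lvm2', 'xfsprogs']   # package_deps is a hypothetical register name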

Changing the mount point on an existing lvm volume results in two mount points in /proc/mounts

Playbook used:

---
- hosts: localhost
  become: true

  roles:
    - name: storage
      storage_pools:
        - name: foo
          disks: ['vdb']
          volumes:
            - name: test1
              size: 10g
              mount_point: '/opt/test1'

    - name: storage
      storage_pools:
        - name: foo
          disks: ['vdb']
          volumes:
            - name: test1
              size: 10g
              mount_point: '/opt/test2'

/proc/mounts:

$ cat /proc/mounts
/dev/mapper/foo-test1 /opt/test1 xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/mapper/foo-test1 /opt/test2 xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0

Lsblk Output (before unmounting):

vdb                                         252:16   0  20G  0 disk 
└─foo-test1                                 253:2    0  10G  0 lvm  /opt/test2
$ sudo umount /opt/test2

Lsblk Output (after unmounting /opt/test2):

vdb                                         252:16   0  20G  0 disk 
└─foo-test1                                 253:2    0  10G  0 lvm  /opt/test1

Other relevant info:

  • This was pretty reproducible.
  • There were two entries for that UUID in /etc/fstab and /proc/mounts (a manual workaround sketch follows below).
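
Until the role removes the obsolete mount itself, the stale entry can be cleaned up by hand. A minimal workaround sketch (not part of the role), using the mount module to unmount the old path and drop its /etc/fstab entry; /opt/test1 is the original mount point from the reproduction above:

- name: Remove the stale mount left behind after changing mount_point   # manual workaround, not role behavior
  mount:
    path: /opt/test1
    state: absent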

Playbook Log:


PLAY [localhost] **************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : get a list of rpm packages installed on host machine] *********************************************************************************************************************************************
ok: [localhost]

TASK [storage : manage pools] *************************************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/pool-default.yml for localhost

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "pool": {
        "_create": false, 
        "_orig_members": [], 
        "_preexist": false, 
        "_remove": false, 
        "disks": [
            "vdb"
        ], 
        "name": "foo", 
        "state": "present", 
        "type": "lvm", 
        "volumes": [
            {
                "mount_point": "/opt/test1", 
                "name": "test1", 
                "size": "10g"
            }
        ]
    }
}

TASK [storage : Resolve disks] ************************************************************************************************************************************************************************************
ok: [localhost] => (item=vdb)

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "resolved_disks": {
        "changed": false, 
        "msg": "All items completed", 
        "results": [
            {
                "_ansible_ignore_errors": null, 
                "_ansible_item_label": "vdb", 
                "_ansible_item_result": true, 
                "_ansible_no_log": false, 
                "_ansible_parsed": true, 
                "changed": false, 
                "device": "/dev/vdb", 
                "failed": false, 
                "invocation": {
                    "module_args": {
                        "spec": "vdb"
                    }
                }, 
                "item": "vdb"
            }
        ]
    }
}

TASK [storage : set list of resolved disk paths] ******************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : parse the specified size] *************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : see if pool already exists] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : manage removal of pool volumes] *******************************************************************************************************************************************************************
skipping: [localhost] => (item={u'mount_point': u'/opt/test1', u'name': u'test1', u'size': u'10g'}) 

TASK [storage : Manage the Specified Pool] ************************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/pool-partitions-default.yml for localhost
included: /home/tim/Documents/storage/tasks/vg-default.yml for localhost

TASK [storage : manage pool partitions] ***************************************************************************************************************************************************************************
skipping: [localhost] => (item=/dev/vdb) 

TASK [storage : Install LVM2 commands as needed] ******************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Run the setup module to use ansible_facts.lvm] ****************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Ensure the setup module runs above by setting a flag that lvm was installed] **********************************************************************************************************************
skipping: [localhost]

TASK [storage : collect list of current pvs] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Set pvs based on disk set] ************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Set pv partitions based on disk set] **************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : collect list of current pvs] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Set pvs from current vg] **************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : configure vg] *************************************************************************************************************************************************************************************
changed: [localhost]

TASK [storage : wipe pvs] *****************************************************************************************************************************************************************************************

TASK [storage : manage pool volumes] ******************************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/volume-default.yml for localhost

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "volume": {
        "_create": false, 
        "_device": "", 
        "_mount": false, 
        "_orig_fs_type": "", 
        "_orig_mount_point": "", 
        "_preexist": false, 
        "_remove": false, 
        "_wipe": false, 
        "fs_create_options": "", 
        "fs_destroy_options": "-af", 
        "fs_label": "", 
        "fs_overwrite_existing": true, 
        "fs_type": "xfs", 
        "mount_check": 0, 
        "mount_device_identifier": "uuid", 
        "mount_options": "defaults", 
        "mount_passno": 0, 
        "mount_point": "/opt/test1", 
        "name": "test1", 
        "size": "10g", 
        "state": "present", 
        "type": "lvm"
    }
}

TASK [storage : Resolve disks] ************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set list of resolved disk paths] ******************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set final device path for whole disk] *************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set final device path for partition] **************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "volume": {
        "_create": false, 
        "_device": "", 
        "_mount": false, 
        "_orig_fs_type": "", 
        "_orig_mount_point": "", 
        "_preexist": false, 
        "_remove": false, 
        "_wipe": false, 
        "fs_create_options": "", 
        "fs_destroy_options": "-af", 
        "fs_label": "", 
        "fs_overwrite_existing": true, 
        "fs_type": "xfs", 
        "mount_check": 0, 
        "mount_device_identifier": "uuid", 
        "mount_options": "defaults", 
        "mount_passno": 0, 
        "mount_point": "/opt/test1", 
        "name": "test1", 
        "size": "10g", 
        "state": "present", 
        "type": "lvm"
    }
}

TASK [storage : set final device path for lv] *********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : stat the final device file] ***********************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/stat_device.yml for localhost

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : find current fs type] *****************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : save current fs type] *****************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : parse the specified size] *************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set up partition parameters] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Manage the Specified Volume] **********************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/partition-default.yml for localhost
included: /home/tim/Documents/storage/tasks/lv-default.yml for localhost
included: /home/tim/Documents/storage/tasks/fs-default.yml for localhost
included: /home/tim/Documents/storage/tasks/mount-default.yml for localhost

TASK [storage : manage a partition] *******************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Install LVM2 commands as needed] ******************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Run the setup module to use ansible_facts.lvm] ****************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Ensure the setup module runs above by setting a flag that lvm was installed] **********************************************************************************************************************
skipping: [localhost]

TASK [storage : Make sure LV exists] ******************************************************************************************************************************************************************************
changed: [localhost]
####### DEBUG ########
{
    "lvol_args": {
        "pvs": null, 
        "force": true, 
        "vg": "foo", 
        "lv": "test1", 
        "resizefs": false, 
        "state": "present", 
        "thinpool": null, 
        "snapshot": null, 
        "active": true, 
        "shrink": false, 
        "opts": null, 
        "size": "10g"
    }
}
####### DEBUG ########

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/stat_device.yml for localhost

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Install xfsprogs for xfs file system type] ********************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Install e2fsprogs for ext file system type] *******************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Install util-linux as needed] *********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : unmount fs if we're going to reformat] ************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Remove file system as needed] *********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Create filesystem as needed] **********************************************************************************************************************************************************************
changed: [localhost]

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/stat_device.yml for localhost

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set path-based device identifier to be used in /etc/fstab] ****************************************************************************************************************************************
ok: [localhost]

TASK [storage : collect file system UUID] *************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set uuid-based device identifier to be used in /etc/fstab] ****************************************************************************************************************************************
ok: [localhost]

TASK [storage : configure mount state (1/2)] **********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : configure mount state (2/2)] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Set up the mount] *********************************************************************************************************************************************************************************
changed: [localhost]

TASK [storage : tell systemd to refresh its view of /etc/fstab] ***************************************************************************************************************************************************
changed: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "Done with test1"
}

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Update facts] *************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "Done with pool foo"
}

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Update facts] *************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : manage volumes] ***********************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : get a list of rpm packages installed on host machine] *********************************************************************************************************************************************
ok: [localhost]

TASK [storage : manage pools] *************************************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/pool-default.yml for localhost

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "pool": {
        "_create": false, 
        "_orig_members": [], 
        "_preexist": false, 
        "_remove": false, 
        "disks": [
            "vdb"
        ], 
        "name": "foo", 
        "state": "present", 
        "type": "lvm", 
        "volumes": [
            {
                "mount_point": "/opt/test2", 
                "name": "test1", 
                "size": "10g"
            }
        ]
    }
}

TASK [storage : Resolve disks] ************************************************************************************************************************************************************************************
ok: [localhost] => (item=vdb)

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "resolved_disks": {
        "changed": false, 
        "msg": "All items completed", 
        "results": [
            {
                "_ansible_ignore_errors": null, 
                "_ansible_item_label": "vdb", 
                "_ansible_item_result": true, 
                "_ansible_no_log": false, 
                "_ansible_parsed": true, 
                "changed": false, 
                "device": "/dev/vdb", 
                "failed": false, 
                "invocation": {
                    "module_args": {
                        "spec": "vdb"
                    }
                }, 
                "item": "vdb"
            }
        ]
    }
}

TASK [storage : set list of resolved disk paths] ******************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : parse the specified size] *************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : see if pool already exists] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : manage removal of pool volumes] *******************************************************************************************************************************************************************
skipping: [localhost] => (item={u'mount_point': u'/opt/test2', u'name': u'test1', u'size': u'10g'}) 

TASK [storage : Manage the Specified Pool] ************************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/pool-partitions-default.yml for localhost
included: /home/tim/Documents/storage/tasks/vg-default.yml for localhost

TASK [storage : manage pool partitions] ***************************************************************************************************************************************************************************
skipping: [localhost] => (item=/dev/vdb) 

TASK [storage : Install LVM2 commands as needed] ******************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Run the setup module to use ansible_facts.lvm] ****************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Ensure the setup module runs above by setting a flag that lvm was installed] **********************************************************************************************************************
skipping: [localhost]

TASK [storage : collect list of current pvs] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Set pvs based on disk set] ************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Set pv partitions based on disk set] **************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : collect list of current pvs] **********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Set pvs from current vg] **************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : configure vg] *************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : wipe pvs] *****************************************************************************************************************************************************************************************
skipping: [localhost] => (item=/dev/vdb) 

TASK [storage : manage pool volumes] ******************************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/volume-default.yml for localhost

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "volume": {
        "_create": false, 
        "_device": "", 
        "_mount": false, 
        "_orig_fs_type": "", 
        "_orig_mount_point": "", 
        "_preexist": false, 
        "_remove": false, 
        "_wipe": false, 
        "fs_create_options": "", 
        "fs_destroy_options": "-af", 
        "fs_label": "", 
        "fs_overwrite_existing": true, 
        "fs_type": "xfs", 
        "mount_check": 0, 
        "mount_device_identifier": "uuid", 
        "mount_options": "defaults", 
        "mount_passno": 0, 
        "mount_point": "/opt/test2", 
        "name": "test1", 
        "size": "10g", 
        "state": "present", 
        "type": "lvm"
    }
}

TASK [storage : Resolve disks] ************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set list of resolved disk paths] ******************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set final device path for whole disk] *************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set final device path for partition] **************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "volume": {
        "_create": false, 
        "_device": "", 
        "_mount": false, 
        "_orig_fs_type": "", 
        "_orig_mount_point": "", 
        "_preexist": false, 
        "_remove": false, 
        "_wipe": false, 
        "fs_create_options": "", 
        "fs_destroy_options": "-af", 
        "fs_label": "", 
        "fs_overwrite_existing": true, 
        "fs_type": "xfs", 
        "mount_check": 0, 
        "mount_device_identifier": "uuid", 
        "mount_options": "defaults", 
        "mount_passno": 0, 
        "mount_point": "/opt/test2", 
        "name": "test1", 
        "size": "10g", 
        "state": "present", 
        "type": "lvm"
    }
}

TASK [storage : set final device path for lv] *********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : stat the final device file] ***********************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/stat_device.yml for localhost

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : find current fs type] *****************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : save current fs type] *****************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : parse the specified size] *************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set up partition parameters] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Manage the Specified Volume] **********************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/partition-default.yml for localhost
included: /home/tim/Documents/storage/tasks/lv-default.yml for localhost
included: /home/tim/Documents/storage/tasks/fs-default.yml for localhost
included: /home/tim/Documents/storage/tasks/mount-default.yml for localhost

TASK [storage : manage a partition] *******************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Install LVM2 commands as needed] ******************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Run the setup module to use ansible_facts.lvm] ****************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Ensure the setup module runs above by setting a flag that lvm was installed] **********************************************************************************************************************
skipping: [localhost]

TASK [storage : Make sure LV exists] ******************************************************************************************************************************************************************************
ok: [localhost]
####### DEBUG ########
{
    "lvol_args": {
        "pvs": null, 
        "force": true, 
        "vg": "foo", 
        "lv": "test1", 
        "resizefs": false, 
        "state": "present", 
        "thinpool": null, 
        "snapshot": null, 
        "active": true, 
        "shrink": false, 
        "opts": null, 
        "size": "10g"
    }
}
####### DEBUG ########
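
The lvol_args printed above appear to be the arguments the role hands off to Ansible's lvol module when ensuring the logical volume exists. A minimal, hand-written task using the same values might look like the sketch below; the module invocation and task wording are assumptions based on the argument names shown, not copied from the role's source:

```yaml
# Hypothetical standalone task reproducing the arguments from the debug
# output above; the role's own task may be structured differently.
- name: Make sure LV exists
  lvol:
    vg: foo            # volume group name from the log
    lv: test1          # logical volume name from the log
    size: 10g          # requested size from the log
    state: present
    force: true
    active: true
    resizefs: false
    shrink: false
```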

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/stat_device.yml for localhost

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Install xfsprogs for xfs file system type] ********************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Install e2fsprogs for ext file system type] *******************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Install util-linux as needed] *********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : unmount fs if we're going to reformat] ************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Remove file system as needed] *********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Create filesystem as needed] **********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
included: /home/tim/Documents/storage/tasks/stat_device.yml for localhost

TASK [storage : Stat the final device file] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set path-based device identifier to be used in /etc/fstab] ****************************************************************************************************************************************
ok: [localhost]

TASK [storage : collect file system UUID] *************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : set uuid-based device identifier to be used in /etc/fstab] ****************************************************************************************************************************************
ok: [localhost]

TASK [storage : configure mount state (1/2)] **********************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : configure mount state (2/2)] **********************************************************************************************************************************************************************
skipping: [localhost]

TASK [storage : Set up the mount] *********************************************************************************************************************************************************************************
changed: [localhost]

TASK [storage : tell systemd to refresh its view of /etc/fstab] ***************************************************************************************************************************************************
changed: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "Done with test1"
}

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Update facts] *************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : debug] ********************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "Done with pool foo"
}

TASK [storage : set_fact] *****************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : Update facts] *************************************************************************************************************************************************************************************
ok: [localhost]

TASK [storage : manage volumes] ***********************************************************************************************************************************************************************************
skipping: [localhost]

PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost                  : ok=103  changed=7    unreachable=0    failed=0   
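
For context, a run like the one above (volume group foo, logical volume test1 of 10g, filesystem created and mounted) could be produced by a playbook along these lines. The backing disk, mount point, and file system type below are hypothetical placeholders, since the transcript does not show the inputs that were actually used:

```yaml
---
# Hypothetical playbook; disk, fs_type, and mount point are illustrative only.
- hosts: localhost
  roles:
    - name: storage            # or linux-system-roles.storage, depending on install
      storage_pools:
        - name: foo            # pool (VG) name seen in the log
          disks: ['sdb']       # hypothetical backing disk
          volumes:
            - name: test1      # LV name seen in the log
              size: 10g        # size seen in the log
              fs_type: xfs     # hypothetical; the log does not show the type
              mount_point: '/opt/test1'  # hypothetical mount point
```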
