I guess your only option is to edit
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml and
replace 'package' with 'dnf' (keep the module keyword indented two spaces
deeper than '- name', i.e. exactly where 'package' starts now).
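
For example, the edited task should end up looking roughly like this
(a sketch based on the task you quoted; only the module keyword changes):

- name: Change to Install lvm tools for RHEL systems.
  dnf:
    name: device-mapper-persistent-data
    state: present
  when: ansible_os_family == 'RedHat'

The nested 'name'/'state' lines and the 'when' condition stay as they are.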
Best Regards,
Strahil Nikolov
On 15 July 2020 at 22:39:09 GMT+03:00, clam2718(a)gmail.com wrote:
>Thank you very much Strahil for your continued assistance. I have
>tried cleaning up and redeploying four additional times and am still
>experiencing the same error.
>
>To Summarize
>
>(1)
>Attempt 1: change gluster_infra_thick_lvs --> size: 100G to size:
>'100%PVS' and change gluster_infra_thinpools --> lvsize: 500G to
>lvsize: '100%PVS'
>Result 1: deployment failed -->
>TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools
>for RHEL systems.] ***
>task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
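>
>(For reference, the Attempt 1 edit to the generated playbook looked
>roughly like this, a sketch with the sizes described above:
>
> gluster_infra_thick_lvs:
> - vgname: gluster_vg_nvme0n1
>   lvname: gluster_lv_engine
>   size: '100%PVS'
>
>plus the analogous lvsize: '100%PVS' change under
>gluster_infra_thinpools.)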
>
>(2)
>Attempt 2: same as Attempt 1, but substituted 99G for '100%PVS'
>Result 2: same as Result 1
>
>(3)
>Attempt 3: same as Attempt 1, but added
>vars:
> ansible_python_interpreter: /usr/bin/python3
>Result 3: same as Result 1
>
>(4)
>Attempt 4: reboot all three nodes, same as Attempt 1 but omitted
>previously edited size arguments as I read in documentation at
>https://github.com/gluster/gluster-ansible-infra that the size/lvsize
>arguments for variables gluster_infra_thick_lvs and
>gluster_infra_lv_logicalvols are optional and default to 100% of the
>LV size.
>
>The latest versions of the playbook and the log output are at the end
>of this post. As best I can tell, the nodes are fully updated default
>installs from verified v4.4.1.1 images.
>
>From /var/log/cockpit/ovirt-dashboard/gluster-deployment.log I see that
>line 33 of
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml is
>what causes the deployment to fail at this point:
>
>- name: Change to Install lvm tools for RHEL systems.
> package:
> name: device-mapper-persistent-data
> state: present
> when: ansible_os_family == 'RedHat'
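>
>(Side note: the package module documentation also describes a 'use'
>option that can pin the backend explicitly, e.g.
>
>- name: Change to Install lvm tools for RHEL systems.
>  package:
>    name: device-mapper-persistent-data
>    state: present
>    use: dnf
>  when: ansible_os_family == 'RedHat'
>
>but I have not tried that, so treat it as an untested idea.)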
>
>But package device-mapper-persistent-data is installed:
>
>[root@fmov1n1 ~]# dnf install device-mapper-persistent-data
>Last metadata expiration check: 0:32:10 ago on Wed 15 Jul 2020 06:44:19
>PM UTC.
>Package device-mapper-persistent-data-0.8.5-3.el8.x86_64 is already
>installed.
>Dependencies resolved.
>Nothing to do.
>Complete!
>
>[root@fmov1n1 ~]# dnf info device-mapper-persistent-data
>Last metadata expiration check: 0:31:44 ago on Wed 15 Jul 2020 06:44:19
>PM UTC.
>Installed Packages
>Name : device-mapper-persistent-data
>Version : 0.8.5
>Release : 3.el8
>Architecture : x86_64
>Size : 1.4 M
>Source : device-mapper-persistent-data-0.8.5-3.el8.src.rpm
>Repository : @System
>Summary : Device-mapper Persistent Data Tools
>URL : https://github.com/jthornber/thin-provisioning-tools
>License : GPLv3+
>Description : thin-provisioning-tools contains
>check,dump,restore,repair,rmap
> : and metadata_size tools to manage device-mapper thin provisioning
> : target metadata devices; cache check,dump,metadata_size,restore
> : and repair tools to manage device-mapper cache metadata devices
> : are included and era check, dump, restore and invalidate to manage
> : snapshot eras
>
>I can't figure out why Ansible v2.9.10 is not calling dnf. The Ansible
>dnf module is installed:
>
>[root@fmov1n1 modules]# ansible-doc -t module dnf
>> DNF
>(/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py)
>
>Installs, upgrade, removes, and lists packages and groups with the
>`dnf' package
> manager.
>
> * This module is maintained by The Ansible Core Team
>...
>
>
>I am unsure how to further troubleshoot from here!
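>
>One thing I have not yet checked is which package manager and Python
>interpreter Ansible actually detects on the nodes, e.g. with
>
>ansible all -i <inventory> -m setup -a 'filter=ansible_pkg_mgr'
>ansible all -i <inventory> -m setup -a 'filter=ansible_python*'
>
>and, if necessary, forcing the interpreter globally in
>/etc/ansible/ansible.cfg rather than per play:
>
>[defaults]
>interpreter_python = /usr/bin/python3
>
>(<inventory> being whatever inventory the wizard uses; just an idea I
>have not verified.)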
>
>Thank you again!!!
>Charles
>
>---
>Latest Gluster Playbook (edited from Wizard output)
>
>hc_nodes:
> hosts:
> fmov1n1.sn.dtcorp.com:
> gluster_infra_volume_groups:
> - vgname: gluster_vg_nvme0n1
> pvname: /dev/mapper/vdo_nvme0n1
> - vgname: gluster_vg_nvme2n1
> pvname: /dev/mapper/vdo_nvme2n1
> - vgname: gluster_vg_nvme1n1
> pvname: /dev/mapper/vdo_nvme1n1
> gluster_infra_mount_devices:
> - path: /gluster_bricks/engine
> lvname: gluster_lv_engine
> vgname: gluster_vg_nvme0n1
> - path: /gluster_bricks/data
> lvname: gluster_lv_data
> vgname: gluster_vg_nvme2n1
> - path: /gluster_bricks/vmstore
> lvname: gluster_lv_vmstore
> vgname: gluster_vg_nvme1n1
> gluster_infra_vdo:
> - name: vdo_nvme0n1
> device: /dev/nvme0n1
> slabsize: 2G
> logicalsize: 100G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> - name: vdo_nvme2n1
> device: /dev/nvme2n1
> slabsize: 2G
> logicalsize: 500G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> - name: vdo_nvme1n1
> device: /dev/nvme1n1
> slabsize: 2G
> logicalsize: 500G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> blacklist_mpath_devices:
> - nvme0n1
> - nvme2n1
> - nvme1n1
> gluster_infra_thick_lvs:
> - vgname: gluster_vg_nvme0n1
> lvname: gluster_lv_engine
> gluster_infra_thinpools:
> - vgname: gluster_vg_nvme2n1
> thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
> poolmetadatasize: 1G
> - vgname: gluster_vg_nvme1n1
> thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
> poolmetadatasize: 1G
> gluster_infra_lv_logicalvols:
> - vgname: gluster_vg_nvme2n1
> thinpool: gluster_thinpool_gluster_vg_nvme2n1
> lvname: gluster_lv_data
> - vgname: gluster_vg_nvme1n1
> thinpool: gluster_thinpool_gluster_vg_nvme1n1
> lvname: gluster_lv_vmstore
> fmov1n2.sn.dtcorp.com:
> gluster_infra_volume_groups:
> - vgname: gluster_vg_nvme0n1
> pvname: /dev/mapper/vdo_nvme0n1
> - vgname: gluster_vg_nvme2n1
> pvname: /dev/mapper/vdo_nvme2n1
> - vgname: gluster_vg_nvme1n1
> pvname: /dev/mapper/vdo_nvme1n1
> gluster_infra_mount_devices:
> - path: /gluster_bricks/engine
> lvname: gluster_lv_engine
> vgname: gluster_vg_nvme0n1
> - path: /gluster_bricks/data
> lvname: gluster_lv_data
> vgname: gluster_vg_nvme2n1
> - path: /gluster_bricks/vmstore
> lvname: gluster_lv_vmstore
> vgname: gluster_vg_nvme1n1
> gluster_infra_vdo:
> - name: vdo_nvme0n1
> device: /dev/nvme0n1
> slabsize: 2G
> logicalsize: 100G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> - name: vdo_nvme2n1
> device: /dev/nvme2n1
> slabsize: 2G
> logicalsize: 500G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> - name: vdo_nvme1n1
> device: /dev/nvme1n1
> slabsize: 2G
> logicalsize: 500G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> blacklist_mpath_devices:
> - nvme0n1
> - nvme2n1
> - nvme1n1
> gluster_infra_thick_lvs:
> - vgname: gluster_vg_nvme0n1
> lvname: gluster_lv_engine
> gluster_infra_thinpools:
> - vgname: gluster_vg_nvme2n1
> thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
> poolmetadatasize: 1G
> - vgname: gluster_vg_nvme1n1
> thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
> poolmetadatasize: 1G
> gluster_infra_lv_logicalvols:
> - vgname: gluster_vg_nvme2n1
> thinpool: gluster_thinpool_gluster_vg_nvme2n1
> lvname: gluster_lv_data
> - vgname: gluster_vg_nvme1n1
> thinpool: gluster_thinpool_gluster_vg_nvme1n1
> lvname: gluster_lv_vmstore
> fmov1n3.sn.dtcorp.com:
> gluster_infra_volume_groups:
> - vgname: gluster_vg_nvme0n1
> pvname: /dev/mapper/vdo_nvme0n1
> - vgname: gluster_vg_nvme2n1
> pvname: /dev/mapper/vdo_nvme2n1
> - vgname: gluster_vg_nvme1n1
> pvname: /dev/mapper/vdo_nvme1n1
> gluster_infra_mount_devices:
> - path: /gluster_bricks/engine
> lvname: gluster_lv_engine
> vgname: gluster_vg_nvme0n1
> - path: /gluster_bricks/data
> lvname: gluster_lv_data
> vgname: gluster_vg_nvme2n1
> - path: /gluster_bricks/vmstore
> lvname: gluster_lv_vmstore
> vgname: gluster_vg_nvme1n1
> gluster_infra_vdo:
> - name: vdo_nvme0n1
> device: /dev/nvme0n1
> slabsize: 2G
> logicalsize: 100G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> - name: vdo_nvme2n1
> device: /dev/nvme2n1
> slabsize: 2G
> logicalsize: 500G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> - name: vdo_nvme1n1
> device: /dev/nvme1n1
> slabsize: 2G
> logicalsize: 500G
> blockmapcachesize: 128M
> emulate512: 'off'
> writepolicy: auto
> maxDiscardSize: 16M
> blacklist_mpath_devices:
> - nvme0n1
> - nvme2n1
> - nvme1n1
> gluster_infra_thick_lvs:
> - vgname: gluster_vg_nvme0n1
> lvname: gluster_lv_engine
> gluster_infra_thinpools:
> - vgname: gluster_vg_nvme2n1
> thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
> poolmetadatasize: 1G
> - vgname: gluster_vg_nvme1n1
> thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
> poolmetadatasize: 1G
> gluster_infra_lv_logicalvols:
> - vgname: gluster_vg_nvme2n1
> thinpool: gluster_thinpool_gluster_vg_nvme2n1
> lvname: gluster_lv_data
> - vgname: gluster_vg_nvme1n1
> thinpool: gluster_thinpool_gluster_vg_nvme1n1
> lvname: gluster_lv_vmstore
> vars:
> ansible_python_interpreter: /usr/bin/python3
> gluster_infra_disktype: JBOD
> gluster_set_selinux_labels: true
> gluster_infra_fw_ports:
> - 2049/tcp
> - 54321/tcp
> - 5900/tcp
> - 5900-6923/tcp
> - 5666/tcp
> - 16514/tcp
> gluster_infra_fw_permanent: true
> gluster_infra_fw_state: enabled
> gluster_infra_fw_zone: public
> gluster_infra_fw_services:
> - glusterfs
> gluster_features_force_varlogsizecheck: false
> cluster_nodes:
> - fmov1n1.sn.dtcorp.com
> - fmov1n2.sn.dtcorp.com
> - fmov1n3.sn.dtcorp.com
> gluster_features_hci_cluster: '{{ cluster_nodes }}'
> gluster_features_hci_volumes:
> - volname: engine
> brick: /gluster_bricks/engine/engine
> arbiter: 0
> - volname: data
> brick: /gluster_bricks/data/data
> arbiter: 0
> - volname: vmstore
> brick: /gluster_bricks/vmstore/vmstore
> arbiter: 0
>
>---
>Latest /var/log/cockpit/ovirt-dashboard/gluster-deployment.log
>
>
>[root@fmov1n1 modules]# cat
>/var/log/cockpit/ovirt-dashboard/gluster-deployment.log
>ansible-playbook 2.9.10
> config file = /etc/ansible/ansible.cfg
>configured module search path = ['/root/.ansible/plugins/modules',
>'/usr/share/ansible/plugins/modules']
>ansible python module location =
>/usr/lib/python3.6/site-packages/ansible
> executable location = /root/../usr/bin/ansible-playbook
>python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1
>20191121 (Red Hat 8.3.1-5)]
>Using /etc/ansible/ansible.cfg as config file
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_config.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main-lvm.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main-lvm.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/fscreate.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_kernelparams.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/fstrim_service.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/luks_device_encrypt.yml
>statically imported:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/bind_tang_server.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/prerequisites.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/distribute_keys.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/master_tasks.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/enable_ganesha.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/add_new_nodes.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/prerequisites.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/glusterd_ipv6.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/ssl-setup.yml
>statically imported:
>/etc/ansible/roles/gluster.features/roles/ctdb/tasks/setup_ctdb.yml
>
>PLAYBOOK: hc_wizard.yml
>********************************************************
>1 plays in
>/root/../usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml
>
>PLAY [Setup backend]
>***********************************************************
>
>TASK [Gathering Facts]
>*********************************************************
>task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:4
>ok: [fmov1n2.sn.dtcorp.com]
>ok: [fmov1n1.sn.dtcorp.com]
>ok: [fmov1n3.sn.dtcorp.com]
>
>TASK [Check if valid hostnames are provided]
>***********************************
>task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:16
>changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n1.sn.dtcorp.com) =>
>{"ansible_loop_var": "item", "changed": true, "cmd": ["getent",
>"ahosts", "fmov1n1.sn.dtcorp.com"], "delta": "0:00:00.006835", "end":
>"2020-07-15 18:03:58.366109", "item": "fmov1n1.sn.dtcorp.com", "rc": 0,
>"start": "2020-07-15 18:03:58.359274", "stderr": "", "stderr_lines":
>[], "stdout": "172.16.16.21 STREAM
>fmov1n1.sn.dtcorp.com\n172.16.16.21 DGRAM \n172.16.16.21 RAW
>", "stdout_lines": ["172.16.16.21 STREAM fmov1n1.sn.dtcorp.com",
>"172.16.16.21 DGRAM ", "172.16.16.21 RAW "]}
>changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n2.sn.dtcorp.com) =>
>{"ansible_loop_var": "item", "changed": true, "cmd": ["getent",
>"ahosts", "fmov1n2.sn.dtcorp.com"], "delta": "0:00:00.004972", "end":
>"2020-07-15 18:03:58.569094", "item": "fmov1n2.sn.dtcorp.com", "rc": 0,
>"start": "2020-07-15 18:03:58.564122", "stderr": "", "stderr_lines":
>[], "stdout": "172.16.16.22 STREAM
>fmov1n2.sn.dtcorp.com\n172.16.16.22 DGRAM \n172.16.16.22 RAW
>", "stdout_lines": ["172.16.16.22 STREAM fmov1n2.sn.dtcorp.com",
>"172.16.16.22 DGRAM ", "172.16.16.22 RAW "]}
>changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n3.sn.dtcorp.com) =>
>{"ansible_loop_var": "item", "changed": true, "cmd": ["getent",
>"ahosts", "fmov1n3.sn.dtcorp.com"], "delta": "0:00:00.004759", "end":
>"2020-07-15 18:03:58.769052", "item": "fmov1n3.sn.dtcorp.com", "rc": 0,
>"start": "2020-07-15 18:03:58.764293", "stderr": "", "stderr_lines":
>[], "stdout": "172.16.16.23 STREAM
>fmov1n3.sn.dtcorp.com\n172.16.16.23 DGRAM \n172.16.16.23 RAW
>", "stdout_lines": ["172.16.16.23 STREAM fmov1n3.sn.dtcorp.com",
>"172.16.16.23 DGRAM ", "172.16.16.23 RAW "]}
>
>TASK [Check if provided hostnames are valid]
>***********************************
>task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:29
>ok: [fmov1n1.sn.dtcorp.com] => {
>    "changed": false,
>    "msg": "All assertions passed"
>}
>ok: [fmov1n2.sn.dtcorp.com] => {
>    "changed": false,
>    "msg": "All assertions passed"
>}
>ok: [fmov1n3.sn.dtcorp.com] => {
>    "changed": false,
>    "msg": "All assertions passed"
>}
>
>TASK [Check if /var/log has enough disk space]
>*********************************
>task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:38
>skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason":
>"Conditional result was False"}
>skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason":
>"Conditional result was False"}
>skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason":
>"Conditional result was False"}
>
>TASK [Check if the /var is greater than 15G]
>***********************************
>task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:43
>skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason":
>"Conditional result was False"}
>skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason":
>"Conditional result was False"}
>skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason":
>"Conditional result was False"}
>
>TASK [Check if disks have logical block size of 512B]
>**************************
>task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:53
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname':
>'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) =>
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"},
>"skip_reason": "Conditional result was False"}
>
>TASK [Check if logical block size is 512 bytes]
>********************************
>task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:61
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) =>
>{"ansible_loop_var": "item", "changed": false, "item":
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"},
>"skip_reason": "Conditional result was False", "skipped": true},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) =>
>{"ansible_loop_var": "item", "changed": false, "item":
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"},
>"skip_reason": "Conditional result was False", "skipped": true},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) =>
>{"ansible_loop_var": "item", "changed": false, "item":
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"},
>"skip_reason": "Conditional result was False", "skipped": true},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) =>
>{"ansible_loop_var": "item", "changed": false, "item":
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"},
>"skip_reason": "Conditional result was False", "skipped": true},
>"skip_reason": "Conditional result was False"}
>skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) =>
>{"ansible_loop_var": "item", "changed": false, "item":
>{"ansible_loop_var": "item", "changed": false, "item": {"pvname":
>"/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"},
>"skip_reason": "Conditional resul