oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails with "insufficient free space" no matter how small the volume is set

Hi,

Deploying oVirt 4.4.1.1 via Cockpit --> Hosted Engine --> Hyperconverged fails at Gluster deployment:

TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
failed: [fmov1n3.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1', 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents): 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical volume 'gluster_lv_engine' failed", "rc": 5}
failed: [fmov1n1.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1', 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents): 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical volume 'gluster_lv_engine' failed", "rc": 5}
failed: [fmov1n2.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1', 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents): 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical volume 'gluster_lv_engine' failed", "rc": 5}

Deployment is on three Dell PowerEdge R740xd hosts with five 1.6TB NVMe drives per host. Only three drives per host are used, as JBOD, one drive per node per volume (engine, data, vmstore), utilizing VDO. Thus, deploying even a 100G volume to a 1.6TB drive fails with the "insufficient free space" error.

I suspect this might have to do with the Ansible playbook deploying Gluster mishandling the logical volume creation due to the rounding error as described here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/htm...

If I can provide any additional information, logs, etc. please ask. Also, if anyone has experience/suggestions with Gluster config for a hyperconverged setup on NVMe drives I would greatly appreciate any pearls of wisdom.

Thank you so very much for any assistance!

Charles

On Tue, Jul 14, 2020 at 3:50 AM <clam2718@gmail.com> wrote:
Hi,
Deploying oVirt 4.4.1.1 via Cockpit --> Hosted Engine --> Hyperconverged fails at Gluster deployment:
TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
failed: [fmov1n3.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1', 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents): 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical volume 'gluster_lv_engine' failed", "rc": 5}
failed: [fmov1n1.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1', 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents): 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical volume 'gluster_lv_engine' failed", "rc": 5}
failed: [fmov1n2.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1', 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents): 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical volume 'gluster_lv_engine' failed", "rc": 5}
Deployment is on three Dell PowerEdge R740xd hosts with five 1.6TB NVMe drives per host. Only three drives per host are used, as JBOD, one drive per node per volume (engine, data, vmstore), utilizing VDO. Thus, deploying even a 100G volume to a 1.6TB drive fails with the "insufficient free space" error.
I suspect this might have to do with the Ansible playbook deploying Gluster mishandling the logical volume creation due to the rounding error as described here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/htm...
If I can provide any additional information, logs, etc. please ask. Also, if anyone has experience/suggestions with Gluster config for hyperconverged setup on NVMe drives I would greatly appreciate any pearls of wisdom.
Can you provide the output of the pvdisplay command on all hosts?
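(For instance, something like this from a machine with root SSH access to the nodes would gather it in one go — a sketch, not part of the original thread; hostnames taken from the messages above:)

for h in fmov1n1 fmov1n2 fmov1n3; do
    echo "=== $h ==="                      # label each host's output
    ssh root@"$h".sn.dtcorp.com pvdisplay
done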
Thank you so very much for any assistance! Charles

Output of pvdisplay for each of the three hosts below.

Node 1:

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme2n1
VG Name               gluster_vg_nvme2n1
PV Size               1000.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              255999
Free PE               255999
Allocated PE          0
PV UUID               tKg74P-klP8-o2sX-XCER-wcHf-XW9Q-mFViNT

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme1n1
VG Name               gluster_vg_nvme1n1
PV Size               1000.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              255999
Free PE               255999
Allocated PE          0
PV UUID               wXyN5p-LaX3-9b9f-3RbH-j1B6-sXfT-UZ0BG7

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme0n1
VG Name               gluster_vg_nvme0n1
PV Size               100.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              25599
Free PE               25599
Allocated PE          0
PV UUID               gTHFgm-NU5J-LJWJ-DyIb-ecm7-85Cq-OedKeX

--- Physical volume ---
PV Name               /dev/mapper/luks-3890d311-7c61-43ae-98a5-42c0318e735f
VG Name               onn
PV Size               <221.92 GiB / not usable 0
Allocatable           yes
PE Size               4.00 MiB
Total PE              56811
Free PE               10897
Allocated PE          45914
PV UUID               FqWsAT-hxAO-UCgq-PA7e-m0W1-3Jrw-XGnLf1

Node 2:

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme2n1
VG Name               gluster_vg_nvme2n1
PV Size               1000.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              255999
Free PE               255999
Allocated PE          0
PV UUID               KR4c82-465u-B22g-2Q95-4l81-1urD-iqvBRt

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme1n1
VG Name               gluster_vg_nvme1n1
PV Size               1000.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              255999
Free PE               255999
Allocated PE          0
PV UUID               sEABVg-tCRU-zW8n-pfPW-p5aj-XbBt-IjsTp1

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme0n1
VG Name               gluster_vg_nvme0n1
PV Size               100.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              25599
Free PE               25599
Allocated PE          0
PV UUID               NLRTl5-05ol-6zcH-ZjAS-T82n-hcow-20LYEL

--- Physical volume ---
PV Name               /dev/mapper/luks-7d42e806-af06-4a72-96b7-de77f76e562f
VG Name               onn
PV Size               <221.92 GiB / not usable 0
Allocatable           yes
PE Size               4.00 MiB
Total PE              56811
Free PE               10897
Allocated PE          45914
PV UUID               O07nNl-yd7X-Gh8x-2d4b-lRME-bz21-OjCykI

Node 3:

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme2n1
VG Name               gluster_vg_nvme2n1
PV Size               1000.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              255999
Free PE               255999
Allocated PE          0
PV UUID               4Yji7W-LuIv-Y2Aq-oD8t-wBwO-VaXY-9coNN0

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme1n1
VG Name               gluster_vg_nvme1n1
PV Size               1000.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              255999
Free PE               255999
Allocated PE          0
PV UUID               rTEqJ0-SkWm-Ge05-iz97-ZOoT-AdYY-L6uHtN

--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme0n1
VG Name               gluster_vg_nvme0n1
PV Size               100.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              25599
Free PE               25599
Allocated PE          0
PV UUID               AoJ9h9-vNYG-IgXQ-gSdB-aYWi-Nzl0-JPiQU3

--- Physical volume ---
PV Name               /dev/mapper/luks-5ac3e150-55c1-4fc2-acd4-f2861c3d2e0a
VG Name               onn
PV Size               <221.92 GiB / not usable 0
Allocatable           yes
PE Size               4.00 MiB
Total PE              56811
Free PE               10897
Allocated PE          45914
PV UUID               N3HLbG-kUIb-5I98-UfZX-eG9A-qnHi-J4tWWi

My apologies for the delay (I am UTC-4). Thanks so very much for your input Ritesh!

Respectfully, Charles

On 14 July 2020 at 16:32:42 GMT+03:00, clam2718@gmail.com wrote:
Output of pvdisplay for each of three hosts below.
--- Physical volume ---
PV Name               /dev/mapper/vdo_nvme0n1
VG Name               gluster_vg_nvme0n1
PV Size               100.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              25599
Free PE               25599
Allocated PE          0
PV UUID               gTHFgm-NU5J-LJWJ-DyIb-ecm7-85Cq-OedKeX
You don't have 100G free because of the 'not usable 4.00 MiB'. Select 99G or '100%PVS' instead. Best Regards, Strahil Nikolov
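(The arithmetic, using the PE Size and Total PE from the pvdisplay output above; the lvcreate lines are only a sketch of the equivalent manual commands, with names taken from the thread, not something the wizard runs verbatim:)

# 100G request: 100 GiB = 102400 MiB / 4 MiB per extent = 25600 extents
# PV provides:  25599 extents (100.00 GiB minus the 4.00 MiB "not usable")
lvcreate -L 99G -n gluster_lv_engine gluster_vg_nvme0n1   # 25344 extents, fits
lvcreate -l 100%PVS -n gluster_lv_engine gluster_vg_nvme0n1 /dev/mapper/vdo_nvme0n1   # or take every extent of the PV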

Thank you Strahil. I think I edited the oVirt Node Cockpit Hyperconverged Wizard Gluster Deployment Ansible playbook as detailed in your post and received the following new failure:

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}

Any further assistance is most appreciated!!!

Respectfully, Charles

--- Gluster Deployment Ansible Playbook

hc_nodes:
  hosts:
    fmov1n1.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 1000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
          size: '100%PVS'
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 3G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
          lvsize: '100%PVS'
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
          lvsize: '100%PVS'
    fmov1n2.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 1000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
          size: '100%PVS'
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 3G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
          lvsize: '100%PVS'
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
          lvsize: '100%PVS'
    fmov1n3.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 1000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
          size: '100%PVS'
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 3G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
          lvsize: '100%PVS'
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
          lvsize: '100%PVS'
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - fmov1n1.sn.dtcorp.com
      - fmov1n2.sn.dtcorp.com
      - fmov1n3.sn.dtcorp.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0

--- /var/log/cockpit/ovirt-dashboard/gluster-deployment.log

PLAY [Setup backend] ***********************************************************
TASK [Gathering Facts] ********************************************************* ok: [fmov1n1.sn.dtcorp.com] ok: [fmov1n2.sn.dtcorp.com] ok: [fmov1n3.sn.dtcorp.com]
TASK [Check if valid hostnames are provided] *********************************** changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n1.sn.dtcorp.com) changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n2.sn.dtcorp.com) changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n3.sn.dtcorp.com)
TASK [Check if provided hostnames are valid] *********************************** ok: [fmov1n1.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" } ok: [fmov1n2.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" } ok: [fmov1n3.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" }
TASK [Check if /var/log has enough disk space] ********************************* skipping: [fmov1n1.sn.dtcorp.com] skipping: [fmov1n2.sn.dtcorp.com] skipping: [fmov1n3.sn.dtcorp.com]
TASK [Check if the /var is greater than 15G] *********************************** skipping: [fmov1n1.sn.dtcorp.com] skipping: [fmov1n2.sn.dtcorp.com] skipping: [fmov1n3.sn.dtcorp.com]
TASK [Check if disks have logical block size of 512B] ************************** skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'})
TASK [Check if logical block size is 512 bytes] ******************************** skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
TASK [Get logical block size of VDO devices] *********************************** skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
TASK [Check if logical block size is 512 bytes for VDO devices] **************** skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] *** ok: [fmov1n3.sn.dtcorp.com] ok: [fmov1n2.sn.dtcorp.com] ok: [fmov1n1.sn.dtcorp.com]
TASK [gluster.infra/roles/firewall_config : check if required variables are set] *** skipping: [fmov1n1.sn.dtcorp.com] skipping: [fmov1n2.sn.dtcorp.com] skipping: [fmov1n3.sn.dtcorp.com]
TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ******** changed: [fmov1n3.sn.dtcorp.com] => (item=2049/tcp) changed: [fmov1n2.sn.dtcorp.com] => (item=2049/tcp) changed: [fmov1n1.sn.dtcorp.com] => (item=2049/tcp) changed: [fmov1n3.sn.dtcorp.com] => (item=54321/tcp) changed: [fmov1n2.sn.dtcorp.com] => (item=54321/tcp) changed: [fmov1n1.sn.dtcorp.com] => (item=54321/tcp) changed: [fmov1n3.sn.dtcorp.com] => (item=5900/tcp) changed: [fmov1n2.sn.dtcorp.com] => (item=5900/tcp) changed: [fmov1n1.sn.dtcorp.com] => (item=5900/tcp) changed: [fmov1n3.sn.dtcorp.com] => (item=5900-6923/tcp) changed: [fmov1n2.sn.dtcorp.com] => (item=5900-6923/tcp) changed: [fmov1n1.sn.dtcorp.com] => (item=5900-6923/tcp) changed: [fmov1n3.sn.dtcorp.com] => (item=5666/tcp) changed: [fmov1n2.sn.dtcorp.com] => (item=5666/tcp) changed: [fmov1n1.sn.dtcorp.com] => (item=5666/tcp) changed: [fmov1n3.sn.dtcorp.com] => (item=16514/tcp) changed: [fmov1n2.sn.dtcorp.com] => (item=16514/tcp) changed: [fmov1n1.sn.dtcorp.com] => (item=16514/tcp)
TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] *** ok: [fmov1n3.sn.dtcorp.com] => (item=glusterfs) ok: [fmov1n2.sn.dtcorp.com] => (item=glusterfs) ok: [fmov1n1.sn.dtcorp.com] => (item=glusterfs)
TASK [gluster.infra/roles/backend_setup : Check that the multipath.conf exists] *** ok: [fmov1n3.sn.dtcorp.com] ok: [fmov1n2.sn.dtcorp.com] ok: [fmov1n1.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] *** skipping: [fmov1n3.sn.dtcorp.com] changed: [fmov1n1.sn.dtcorp.com] changed: [fmov1n2.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is running] *** ok: [fmov1n3.sn.dtcorp.com] changed: [fmov1n1.sn.dtcorp.com] changed: [fmov1n2.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Create /etc/multipath/conf.d if doesn't exists] *** changed: [fmov1n3.sn.dtcorp.com] changed: [fmov1n2.sn.dtcorp.com] changed: [fmov1n1.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Get the UUID of the devices] ********* changed: [fmov1n3.sn.dtcorp.com] => (item=nvme0n1) changed: [fmov1n2.sn.dtcorp.com] => (item=nvme0n1) changed: [fmov1n1.sn.dtcorp.com] => (item=nvme0n1) changed: [fmov1n3.sn.dtcorp.com] => (item=nvme2n1) changed: [fmov1n2.sn.dtcorp.com] => (item=nvme2n1) changed: [fmov1n1.sn.dtcorp.com] => (item=nvme2n1) changed: [fmov1n3.sn.dtcorp.com] => (item=nvme1n1) changed: [fmov1n2.sn.dtcorp.com] => (item=nvme1n1) changed: [fmov1n1.sn.dtcorp.com] => (item=nvme1n1)
TASK [gluster.infra/roles/backend_setup : Check that the blacklist.conf exists] *** ok: [fmov1n3.sn.dtcorp.com] ok: [fmov1n2.sn.dtcorp.com] ok: [fmov1n1.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Create blacklist template content] *** changed: [fmov1n3.sn.dtcorp.com] changed: [fmov1n2.sn.dtcorp.com] changed: [fmov1n1.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Add wwid to blacklist in blacklist.conf file] *** changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020750025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.612051', 'end': '2020-07-14 21:06:36.623511', 'delta': '0:00:00.011460', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020750025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020530025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.674961', 'end': '2020-07-14 21:06:36.687875', 'delta': '0:00:00.012914', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020530025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020220025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.732721', 'end': '2020-07-14 21:06:36.744468', 'delta': '0:00:00.011747', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020220025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7020730025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.411729', 'end': '2020-07-14 21:06:41.423305', 'delta': '0:00:00.011576', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020730025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7020190025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.683414', 'end': '2020-07-14 21:06:41.695115', 'delta': '0:00:00.011701', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020190025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7007630025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.832021', 'end': '2020-07-14 21:06:41.844162', 'delta': '0:00:00.012141', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7007630025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020760025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:46.242072', 'end': '2020-07-14 21:06:46.253191', 'delta': '0:00:00.011119', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020760025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'}) changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020690025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:46.697920', 'end': '2020-07-14 21:06:46.708944', 'delta': '0:00:00.011024', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020690025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'}) changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020540025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:46.860257', 'end': '2020-07-14 21:06:46.871208', 'delta': '0:00:00.010951', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020540025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'})
TASK [gluster.infra/roles/backend_setup : Reload multipathd] ******************* changed: [fmov1n3.sn.dtcorp.com] changed: [fmov1n2.sn.dtcorp.com] changed: [fmov1n1.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] *** ok: [fmov1n1.sn.dtcorp.com] ok: [fmov1n2.sn.dtcorp.com] ok: [fmov1n3.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] *** skipping: [fmov1n1.sn.dtcorp.com] skipping: [fmov1n2.sn.dtcorp.com] skipping: [fmov1n3.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] *** fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."} fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."} fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
fmov1n1.sn.dtcorp.com : ok=16 changed=9 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0
fmov1n2.sn.dtcorp.com : ok=15 changed=8 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0
fmov1n3.sn.dtcorp.com : ok=14 changed=6 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0

Based on https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_s... the module used is package, but the strange thing is why Ansible doesn't detect python3 and dnf. As far as I remember, you can edit the play before running it. Maybe this will fix it:

1. Go to the command line and run: which python3
2. Set 'ansible_python_interpreter' to the value from the previous step.

Most probably you need to convert it to:

vars:
  ansible_python_interpreter: /full/path/to/python3

Note that the variable 'ansible_python_interpreter' must be indented to the right with 2 spaces (no tabs allowed).

Best Regards, Strahil Nikolov
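(For concreteness, applied to the hc_nodes playbook above this would look something like the following — a sketch only; /usr/bin/python3 is an assumed path, use whatever `which python3` actually prints on the nodes:)

hc_nodes:
  vars:
    # assumed interpreter path -- confirm with `which python3` on each node
    ansible_python_interpreter: /usr/bin/python3
    gluster_infra_disktype: JBOD
    # ... remaining vars unchanged ...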
On 15 July 2020 at 0:19:09 GMT+03:00, clam2718@gmail.com wrote:
Thank you Strahil. I think I edited the oVirt Node Cockpit Hyperconverged Wizard Gluster Deployment Ansible playbook as detailed in your post and received the following new failure:
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
Any further assistance is most appreciated!!!
Respectfully, Charles

Also, check the LV size on the system, as it seems that, based on your previous outputs, the PV names do not match. You might now have a very large HostedEngine LV, which would be a waste of space. A quick check is sketched below. Best Regards, Strahil Nikolov
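(A sketch of such a check — not from the original thread; the VG, LV, and device names are taken from the playbook above:)

# Compare the engine LV, its VG/PV, and the VDO volume behind it on each host
lvs -o vg_name,lv_name,lv_size gluster_vg_nvme0n1
pvs -o pv_name,vg_name,pv_size,pv_free /dev/mapper/vdo_nvme0n1
vdo status --name vdo_nvme0n1 | grep -i 'logical size'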
On 15 July 2020 at 0:19:09 GMT+03:00, clam2718@gmail.com wrote:
Thank you Strahil. I think I edited the oVirt Node Cockpit Hyperconverged Wizard Gluster Deployment Ansible playbook as detailed in your post and received the following new failure:
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
Any further assistance is most appreciated!!!
Respectfully, Charles
--- Gluster Deployment Ansible Playbook
hc_nodes:
  hosts:
    fmov1n1.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 1000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
          size: '100%PVS'
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 3G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
          lvsize: '100%PVS'
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
          lvsize: '100%PVS'
    fmov1n2.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 1000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
          size: '100%PVS'
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 3G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
          lvsize: '100%PVS'
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
          lvsize: '100%PVS'
    fmov1n3.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 1000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 32G
          logicalsize: 5000G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
          size: '100%PVS'
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 3G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
          lvsize: '100%PVS'
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
          lvsize: '100%PVS'
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - fmov1n1.sn.dtcorp.com
      - fmov1n2.sn.dtcorp.com
      - fmov1n3.sn.dtcorp.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
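For reference, the extent math behind the change to '100%PVS': with LVM's default 4 MiB physical extent size, a fixed "size: 100G" request needs exactly 25600 extents, and a VG created on top of a VDO device can come up a few extents short of the nominal size once VDO and LVM metadata are accounted for, so a fixed byte count can fail where a percentage succeeds. A quick way to see what a VG actually has free, and what '100%PVS' will resolve to. This is a sketch only, using the names from the playbook above; it assumes the VDO and VG steps have already run, and the exact counts will vary per device:

# Show the physical extent size and free extent count in the engine VG
vgdisplay gluster_vg_nvme0n1 | grep -E 'PE Size|Free'
vgs --units m -o vg_name,vg_size,vg_free,vg_extent_size gluster_vg_nvme0n1

# Requesting all remaining extents instead of a fixed byte count
# sidesteps the off-by-one entirely:
lvcreate -l 100%FREE -n gluster_lv_engine gluster_vg_nvme0n1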
--- /var/log/cockpit/ovirt-dashboard/gluster-deployment.log
PLAY [Setup backend] ***********************************************************
TASK [Gathering Facts] *********************************************************
ok: [fmov1n1.sn.dtcorp.com]
ok: [fmov1n2.sn.dtcorp.com]
ok: [fmov1n3.sn.dtcorp.com]

TASK [Check if valid hostnames are provided] ***********************************
changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n1.sn.dtcorp.com)
changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n2.sn.dtcorp.com)
changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n3.sn.dtcorp.com)

TASK [Check if provided hostnames are valid] ***********************************
ok: [fmov1n1.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" }
ok: [fmov1n2.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" }
ok: [fmov1n3.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" }

TASK [Check if /var/log has enough disk space] *********************************
skipping: [fmov1n1.sn.dtcorp.com]
skipping: [fmov1n2.sn.dtcorp.com]
skipping: [fmov1n3.sn.dtcorp.com]

TASK [Check if the /var is greater than 15G] ***********************************
skipping: [fmov1n1.sn.dtcorp.com]
skipping: [fmov1n2.sn.dtcorp.com]
skipping: [fmov1n3.sn.dtcorp.com]

TASK [Check if disks have logical block size of 512B] **************************
skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'})
skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'})
skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'})
skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'})
skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'})
skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'})
skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'})
skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'})
skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'})

TASK [Check if logical block size is 512 bytes] ********************************
skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)

TASK [Get logical block size of VDO devices] ***********************************
skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})
skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'})

TASK [Check if logical block size is 512 bytes for VDO devices] ****************
skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size)
TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
ok: [fmov1n3.sn.dtcorp.com]
ok: [fmov1n2.sn.dtcorp.com]
ok: [fmov1n1.sn.dtcorp.com]

TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
skipping: [fmov1n1.sn.dtcorp.com]
skipping: [fmov1n2.sn.dtcorp.com]
skipping: [fmov1n3.sn.dtcorp.com]

TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
changed: [fmov1n3.sn.dtcorp.com] => (item=2049/tcp)
changed: [fmov1n2.sn.dtcorp.com] => (item=2049/tcp)
changed: [fmov1n1.sn.dtcorp.com] => (item=2049/tcp)
changed: [fmov1n3.sn.dtcorp.com] => (item=54321/tcp)
changed: [fmov1n2.sn.dtcorp.com] => (item=54321/tcp)
changed: [fmov1n1.sn.dtcorp.com] => (item=54321/tcp)
changed: [fmov1n3.sn.dtcorp.com] => (item=5900/tcp)
changed: [fmov1n2.sn.dtcorp.com] => (item=5900/tcp)
changed: [fmov1n1.sn.dtcorp.com] => (item=5900/tcp)
changed: [fmov1n3.sn.dtcorp.com] => (item=5900-6923/tcp)
changed: [fmov1n2.sn.dtcorp.com] => (item=5900-6923/tcp)
changed: [fmov1n1.sn.dtcorp.com] => (item=5900-6923/tcp)
changed: [fmov1n3.sn.dtcorp.com] => (item=5666/tcp)
changed: [fmov1n2.sn.dtcorp.com] => (item=5666/tcp)
changed: [fmov1n1.sn.dtcorp.com] => (item=5666/tcp)
changed: [fmov1n3.sn.dtcorp.com] => (item=16514/tcp)
changed: [fmov1n2.sn.dtcorp.com] => (item=16514/tcp)
changed: [fmov1n1.sn.dtcorp.com] => (item=16514/tcp)

TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
ok: [fmov1n3.sn.dtcorp.com] => (item=glusterfs)
ok: [fmov1n2.sn.dtcorp.com] => (item=glusterfs)
ok: [fmov1n1.sn.dtcorp.com] => (item=glusterfs)

TASK [gluster.infra/roles/backend_setup : Check that the multipath.conf exists] ***
ok: [fmov1n3.sn.dtcorp.com]
ok: [fmov1n2.sn.dtcorp.com]
ok: [fmov1n1.sn.dtcorp.com]

TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] ***
skipping: [fmov1n3.sn.dtcorp.com]
changed: [fmov1n1.sn.dtcorp.com]
changed: [fmov1n2.sn.dtcorp.com]

TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is running] ***
ok: [fmov1n3.sn.dtcorp.com]
changed: [fmov1n1.sn.dtcorp.com]
changed: [fmov1n2.sn.dtcorp.com]

TASK [gluster.infra/roles/backend_setup : Create /etc/multipath/conf.d if doesn't exists] ***
changed: [fmov1n3.sn.dtcorp.com]
changed: [fmov1n2.sn.dtcorp.com]
changed: [fmov1n1.sn.dtcorp.com]

TASK [gluster.infra/roles/backend_setup : Get the UUID of the devices] *********
changed: [fmov1n3.sn.dtcorp.com] => (item=nvme0n1)
changed: [fmov1n2.sn.dtcorp.com] => (item=nvme0n1)
changed: [fmov1n1.sn.dtcorp.com] => (item=nvme0n1)
changed: [fmov1n3.sn.dtcorp.com] => (item=nvme2n1)
changed: [fmov1n2.sn.dtcorp.com] => (item=nvme2n1)
changed: [fmov1n1.sn.dtcorp.com] => (item=nvme2n1)
changed: [fmov1n3.sn.dtcorp.com] => (item=nvme1n1)
changed: [fmov1n2.sn.dtcorp.com] => (item=nvme1n1)
changed: [fmov1n1.sn.dtcorp.com] => (item=nvme1n1)

TASK [gluster.infra/roles/backend_setup : Check that the blacklist.conf exists] ***
ok: [fmov1n3.sn.dtcorp.com]
ok: [fmov1n2.sn.dtcorp.com]
ok: [fmov1n1.sn.dtcorp.com]

TASK [gluster.infra/roles/backend_setup : Create blacklist template content] ***
changed: [fmov1n3.sn.dtcorp.com]
changed: [fmov1n2.sn.dtcorp.com]
changed: [fmov1n1.sn.dtcorp.com]
TASK [gluster.infra/roles/backend_setup : Add wwid to blacklist in blacklist.conf file] ***
changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020750025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.612051', 'end': '2020-07-14 21:06:36.623511', 'delta': '0:00:00.011460', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020750025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'})
changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020530025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.674961', 'end': '2020-07-14 21:06:36.687875', 'delta': '0:00:00.012914', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020530025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'})
changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020220025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:36.732721', 'end': '2020-07-14 21:06:36.744468', 'delta': '0:00:00.011747', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020220025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'})
changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7020730025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.411729', 'end': '2020-07-14 21:06:41.423305', 'delta': '0:00:00.011576', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020730025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'})
changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7020190025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.683414', 'end': '2020-07-14 21:06:41.695115', 'delta': '0:00:00.011701', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020190025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'})
changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7007630025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:41.832021', 'end': '2020-07-14 21:06:41.844162', 'delta': '0:00:00.012141', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7007630025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'})
changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020760025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:46.242072', 'end': '2020-07-14 21:06:46.253191', 'delta': '0:00:00.011119', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020760025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'})
changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020690025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-14 21:06:46.697920', 'end': '2020-07-14 21:06:46.708944', 'delta': '0:00:00.011024', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020690025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'})
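For anyone following along, once these blacklist tasks have run, you can confirm the NVMe drives really are excluded from multipath before retrying the deployment. A sketch only, assuming the role writes blacklist.conf under /etc/multipath/conf.d as the task names above suggest:

# Entries the role templated into the blacklist
cat /etc/multipath/conf.d/blacklist.conf

# Effective multipath configuration, including the merged blacklist section
multipath -t | grep -A 10 blacklist

# No nvme maps should remain
multipath -ll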

Thank you very much Strahil for your continued assistance. I have tried cleaning up and redeploying four additional times and am still experiencing the same error. To summarize:

(1) Attempt 1: changed gluster_infra_thick_lvs --> size: 100G to size: '100%PVS' and changed gluster_infra_thinpools --> lvsize: 500G to lvsize: '100%PVS'
Result 1: deployment failed -->
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}

(2) Attempt 2: same as Attempt 1, but substituted 99G for '100%PVS'
Result 2: same as Result 1

(3) Attempt 3: same as Attempt 1, but added vars: ansible_python_interpreter: /usr/bin/python3
Result 3: same as Result 1

(4) Attempt 4: rebooted all three nodes, then same as Attempt 1 but omitted the previously edited size arguments, since the documentation at https://github.com/gluster/gluster-ansible-infra says the size/lvsize arguments for the variables gluster_infra_thick_lvs and gluster_infra_lv_logicalvols are optional and default to 100% of the available size.

At the end of this post are the latest version of the playbook and the log output. As best I can tell the nodes are fully updated, default installs using verified images of v4.4.1.1.

From /var/log/cockpit/ovirt-dashboard/gluster-deployment.log I can see that the task at line 33 of /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml is what is causing the deployment to fail at this point:

- name: Change to Install lvm tools for RHEL systems.
  package:
    name: device-mapper-persistent-data
    state: present
  when: ansible_os_family == 'RedHat'

But the package device-mapper-persistent-data is installed:

[root@fmov1n1 ~]# dnf install device-mapper-persistent-data
Last metadata expiration check: 0:32:10 ago on Wed 15 Jul 2020 06:44:19 PM UTC.
Package device-mapper-persistent-data-0.8.5-3.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!

[root@fmov1n1 ~]# dnf info device-mapper-persistent-data
Last metadata expiration check: 0:31:44 ago on Wed 15 Jul 2020 06:44:19 PM UTC.
Installed Packages
Name         : device-mapper-persistent-data
Version      : 0.8.5
Release      : 3.el8
Architecture : x86_64
Size         : 1.4 M
Source       : device-mapper-persistent-data-0.8.5-3.el8.src.rpm
Repository   : @System
Summary      : Device-mapper Persistent Data Tools
URL          : https://github.com/jthornber/thin-provisioning-tools
License      : GPLv3+
Description  : thin-provisioning-tools contains check,dump,restore,repair,rmap
             : and metadata_size tools to manage device-mapper thin provisioning
             : target metadata devices; cache check,dump,metadata_size,restore
             : and repair tools to manage device-mapper cache metadata devices
             : are included and era check, dump, restore and invalidate to manage
             : snapshot eras

I can't figure out why Ansible v2.9.10 is not calling dnf. The Ansible dnf module is installed:

[root@fmov1n1 modules]# ansible-doc -t module dnf
DNF (/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py)
Installs, upgrade, removes, and lists packages and groups with the `dnf' package manager.

  * This module is maintained by The Ansible Core Team
...

I am unsure how to further troubleshoot from here! Thank you again!!!

Charles

--- Latest Gluster Playbook (edited from Wizard output)

hc_nodes:
  hosts:
    fmov1n1.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 100G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 1G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
    fmov1n2.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 100G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 1G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
    fmov1n3.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 100G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 1G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
  vars:
    ansible_python_interpreter: /usr/bin/python3
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - fmov1n1.sn.dtcorp.com
      - fmov1n2.sn.dtcorp.com
      - fmov1n3.sn.dtcorp.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0

--- Latest /var/log/cockpit/ovirt-dashboard/gluster-deployment.log

[root@fmov1n1 modules]# cat /var/log/cockpit/ovirt-dashboard/gluster-deployment.log
ansible-playbook 2.9.10
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /root/../usr/bin/ansible-playbook
  python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
Using /etc/ansible/ansible.cfg as config file
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_config.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main-lvm.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml
statically imported:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main-lvm.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/fscreate.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_kernelparams.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/fstrim_service.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/luks_device_encrypt.yml statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/bind_tang_server.yml statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/prerequisites.yml statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/distribute_keys.yml statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/master_tasks.yml statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/enable_ganesha.yml statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/add_new_nodes.yml statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/prerequisites.yml statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/glusterd_ipv6.yml statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/ssl-setup.yml statically imported: /etc/ansible/roles/gluster.features/roles/ctdb/tasks/setup_ctdb.yml PLAYBOOK: hc_wizard.yml ******************************************************** 1 plays in /root/../usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml PLAY [Setup backend] *********************************************************** TASK [Gathering Facts] ********************************************************* task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:4 ok: [fmov1n2.sn.dtcorp.com] ok: [fmov1n1.sn.dtcorp.com] ok: [fmov1n3.sn.dtcorp.com] TASK [Check if valid hostnames are provided] *********************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:16 changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n1.sn.dtcorp.com) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "fmov1n1.sn.dtcorp.com"], "delta": "0:00:00.006835", "end": "2020-07-15 18:03:58.366109", "item": "fmov1n1.sn.dtcorp.com", "rc": 0, "start": "2020-07-15 18:03:58.359274", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.21 STREAM fmov1n1.sn.dtcorp.com\n172.16.16.21 DGRAM \n172.16.16.21 RAW ", "stdout_lines": ["172.16.16.21 STREAM fmov1n1.sn.dtcorp.com", 
"172.16.16.21 DGRAM ", "172.16.16.21 RAW "]} changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n2.sn.dtcorp.com) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "fmov1n2.sn.dtcorp.com"], "delta": "0:00:00.004972", "end": "2020-07-15 18:03:58.569094", "item": "fmov1n2.sn.dtcorp.com", "rc": 0, "start": "2020-07-15 18:03:58.564122", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.22 STREAM fmov1n2.sn.dtcorp.com\n172.16.16.22 DGRAM \n172.16.16.22 RAW ", "stdout_lines": ["172.16.16.22 STREAM fmov1n2.sn.dtcorp.com", "172.16.16.22 DGRAM ", "172.16.16.22 RAW "]} changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n3.sn.dtcorp.com) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "fmov1n3.sn.dtcorp.com"], "delta": "0:00:00.004759", "end": "2020-07-15 18:03:58.769052", "item": "fmov1n3.sn.dtcorp.com", "rc": 0, "start": "2020-07-15 18:03:58.764293", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.23 STREAM fmov1n3.sn.dtcorp.com\n172.16.16.23 DGRAM \n172.16.16.23 RAW ", "stdout_lines": ["172.16.16.23 STREAM fmov1n3.sn.dtcorp.com", "172.16.16.23 DGRAM ", "172.16.16.23 RAW "]} TASK [Check if provided hostnames are valid] *********************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:29 ok: [fmov1n1.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" } ok: [fmov1n2.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" } ok: [fmov1n3.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" } TASK [Check if /var/log has enough disk space] ********************************* task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:38 skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} TASK [Check if the /var is greater than 15G] *********************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:43 skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} TASK [Check if disks have logical block size of 512B] ************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:53 skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 
'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"} TASK [Check if logical block size is 512 bytes] ******************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:61 skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional 
result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} TASK [Get logical block size of VDO devices] *********************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:73 skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '100G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "100G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '2G', 'logicalsize': '500G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '2G', 'logicalsize': '500G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '100G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": 
{"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "100G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '2G', 'logicalsize': '500G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '2G', 'logicalsize': '500G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '100G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "100G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '2G', 'logicalsize': '500G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '2G', 'logicalsize': '500G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"} TASK [Check if logical block size is 512 bytes for VDO devices] **************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:80 skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "100G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => 
(item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "100G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "100G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": 
"off", "logicalsize": "500G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} META: ran handlers TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] *** task path: /etc/ansible/roles/gluster.infra/roles/firewall_config/tasks/main.yml:3 ok: [fmov1n3.sn.dtcorp.com] => {"changed": false, "name": "firewalld", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2020-07-15 17:35:27 UTC", "ActiveEnterTimestampMonotonic": "74188272", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice polkit.service basic.target dbus.socket sysinit.target dbus.service", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Wed 2020-07-15 17:35:27 UTC", "AssertTimestampMonotonic": "73834729", "Before": "multi-user.target network-pre.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode ": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2020-07-15 17:35:27 UTC", "ConditionTimestampMonotonic": "73834729", "ConfigurationDirectoryMode": "0755", "Conflicts": "ebtables.service shutdown.target ip6tables.service iptables.service ipset.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "3368", "ExecMainStartTimestamp": "Wed 2020-07-15 17:35:27 UTC", "ExecMainStartTimestampMonotonic": "73836957", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[Wed 2020-07-15 17:35:27 UTC] ; stop_time=[n/a] ; pid=3368 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/fir ewalld.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", 
"IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Wed 2020-07-15 17:35:27 UTC", "InactiveExitTimestampMonotonic": "73837026", "InvocationID": "b7091178b85847019eb6ed9b736d4baa", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", " LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540732", "LimitNPROCSoft": "1540732", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540732", "LimitSIGPENDINGSoft": "1540732", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "3368", "MemoryAccounting": "yes", "MemoryCurrent": "37281792", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infini ty", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Wed 2020-07-15 17:35:27 UTC", "StateChangeTimestampMonotonic": "74188272", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": 
"running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "2465171", "TimeoutS tartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Wed 2020-07-15 17:35:27 UTC", "WatchdogTimestampMonotonic": "74188270", "WatchdogUSec": "0"}} ok: [fmov1n2.sn.dtcorp.com] => {"changed": false, "name": "firewalld", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2020-07-15 17:35:12 UTC", "ActiveEnterTimestampMonotonic": "62246946", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target sysinit.target dbus.service system.slice polkit.service dbus.socket", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Wed 2020-07-15 17:35:11 UTC", "AssertTimestampMonotonic": "61911460", "Before": "network-pre.target multi-user.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode ": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2020-07-15 17:35:11 UTC", "ConditionTimestampMonotonic": "61911459", "ConfigurationDirectoryMode": "0755", "Conflicts": "ip6tables.service ebtables.service ipset.service shutdown.target iptables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "2916", "ExecMainStartTimestamp": "Wed 2020-07-15 17:35:11 UTC", "ExecMainStartTimestampMonotonic": "61913294", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[Wed 2020-07-15 17:35:11 UTC] ; 
stop_time=[n/a] ; pid=2916 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/fir ewalld.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Wed 2020-07-15 17:35:11 UTC", "InactiveExitTimestampMonotonic": "61913358", "InvocationID": "517381a064734478a6b8618719f303d5", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", " LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540537", "LimitNPROCSoft": "1540537", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540537", "LimitSIGPENDINGSoft": "1540537", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "2916", "MemoryAccounting": "yes", "MemoryCurrent": "43405312", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infini ty", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice dbus.socket sysinit.target", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectory StartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", 
"StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Wed 2020-07-15 17:35:12 UTC", "StateChangeTimestampMonotonic": "62246946", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurre nt": "2", "TasksMax": "2464860", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Wed 2020-07-15 17:35:12 UTC", "WatchdogTimestampMonotonic": "62246944", "WatchdogUSec": "0"}} ok: [fmov1n1.sn.dtcorp.com] => {"changed": false, "name": "firewalld", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2020-07-15 17:34:40 UTC", "ActiveEnterTimestampMonotonic": "43622760", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice basic.target dbus.service sysinit.target polkit.service dbus.socket", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Wed 2020-07-15 17:34:40 UTC", "AssertTimestampMonotonic": "43287382", "Before": "shutdown.target network-pre.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode ": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2020-07-15 17:34:40 UTC", "ConditionTimestampMonotonic": "43287382", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target iptables.service ip6tables.service ebtables.service ipset.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "2908", "ExecMainStartTimestamp": "Wed 2020-07-15 17:34:40 UTC", "ExecMainStartTimestampMonotonic": 
"43289228", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[Wed 2020-07-15 17:34:40 UTC] ; stop_time=[n/a] ; pid=2908 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/fir ewalld.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Wed 2020-07-15 17:34:40 UTC", "InactiveExitTimestampMonotonic": "43289295", "InvocationID": "39eb66790ee84e5fa63a6d0fb5ce9b75", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", " LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540537", "LimitNPROCSoft": "1540537", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540537", "LimitSIGPENDINGSoft": "1540537", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "2908", "MemoryAccounting": "yes", "MemoryCurrent": "42672128", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infini ty", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "dbus-org.fedoraproject.FirewallD1.service firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice dbus.socket sysinit.target", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectory StartOnly": "no", 
"RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Wed 2020-07-15 17:34:40 UTC", "StateChangeTimestampMonotonic": "43622760", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurre nt": "2", "TasksMax": "2464860", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Wed 2020-07-15 17:34:40 UTC", "WatchdogTimestampMonotonic": "43622758", "WatchdogUSec": "0"}} TASK [gluster.infra/roles/firewall_config : check if required variables are set] *** task path: /etc/ansible/roles/gluster.infra/roles/firewall_config/tasks/main.yml:8 skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ******** task path: /etc/ansible/roles/gluster.infra/roles/firewall_config/tasks/main.yml:13 ok: [fmov1n3.sn.dtcorp.com] => (item=2049/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "2049/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n2.sn.dtcorp.com] => (item=2049/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "2049/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n1.sn.dtcorp.com] => (item=2049/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "2049/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n3.sn.dtcorp.com] => (item=54321/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "54321/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n2.sn.dtcorp.com] => (item=54321/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "54321/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n1.sn.dtcorp.com] => (item=54321/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "54321/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n3.sn.dtcorp.com] => (item=5900/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n2.sn.dtcorp.com] => (item=5900/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n1.sn.dtcorp.com] => (item=5900/tcp) => 
{"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n3.sn.dtcorp.com] => (item=5900-6923/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900-6923/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n2.sn.dtcorp.com] => (item=5900-6923/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900-6923/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n1.sn.dtcorp.com] => (item=5900-6923/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900-6923/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n3.sn.dtcorp.com] => (item=5666/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5666/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n2.sn.dtcorp.com] => (item=5666/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5666/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n1.sn.dtcorp.com] => (item=5666/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5666/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n3.sn.dtcorp.com] => (item=16514/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "16514/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n2.sn.dtcorp.com] => (item=16514/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "16514/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n1.sn.dtcorp.com] => (item=16514/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "16514/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"} TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] *** task path: /etc/ansible/roles/gluster.infra/roles/firewall_config/tasks/main.yml:24 ok: [fmov1n3.sn.dtcorp.com] => (item=glusterfs) => {"ansible_loop_var": "item", "changed": false, "item": "glusterfs", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n2.sn.dtcorp.com] => (item=glusterfs) => {"ansible_loop_var": "item", "changed": false, "item": "glusterfs", "msg": "Permanent and Non-Permanent(immediate) operation"} ok: [fmov1n1.sn.dtcorp.com] => (item=glusterfs) => {"ansible_loop_var": "item", "changed": false, "item": "glusterfs", "msg": "Permanent and Non-Permanent(immediate) operation"} TASK [gluster.infra/roles/backend_setup : Check that the multipath.conf exists] *** task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:3 ok: [fmov1n3.sn.dtcorp.com] => {"changed": false, "stat": {"atime": 1594753496.7839446, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 16, "charset": "us-ascii", "checksum": "da2254ee7938e2ca05dc3eb865fcc3ce061dbf69", "ctime": 1594753496.7689447, "dev": 64772, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 301990232, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1594753496.7679446, "nlink": 1, "path": "/etc/multipath.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 6556, "uid": 0, "version": "1596722286", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} ok: [fmov1n1.sn.dtcorp.com] => 
{"changed": false, "stat": {"atime": 1594760778.7507734, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 16, "charset": "us-ascii", "checksum": "da2254ee7938e2ca05dc3eb865fcc3ce061dbf69", "ctime": 1594760778.7387724, "dev": 64772, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 134324630, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1594760778.7337718, "nlink": 1, "path": "/etc/multipath.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 6556, "uid": 0, "version": "3088845013", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} ok: [fmov1n2.sn.dtcorp.com] => {"changed": false, "stat": {"atime": 1594760778.9185624, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 16, "charset": "us-ascii", "checksum": "da2254ee7938e2ca05dc3eb865fcc3ce061dbf69", "ctime": 1594760778.9055612, "dev": 64772, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 134324630, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1594760778.8995607, "nlink": 1, "path": "/etc/multipath.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 6556, "uid": 0, "version": "1307511769", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] *** task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:8 skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is running] *** task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:12 ok: [fmov1n3.sn.dtcorp.com] => {"changed": false, "enabled": true, "name": "multipathd", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2020-07-15 17:35:26 UTC", "ActiveEnterTimestampMonotonic": "73041017", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-udev-trigger.service multipathd.socket systemd-journald.socket systemd-udev-settle.service system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Wed 2020-07-15 17:35:26 UTC", "AssertTimestampMonotonic": "72838210", "Before": "lvm2-activation-early.service blk-availability.service iscsi.service iscsid.service local-fs-pre.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": " [not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", 
"CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2020-07-15 17:35:26 UTC", "ConditionTimestampMonotonic": "72838112", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/multipathd.service", "ControlPID": "0", "Defaul tDependencies": "no", "Delegate": "no", "Description": "Device-Mapper Multipath Device Controller", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "2956", "ExecMainStartTimestamp": "Wed 2020-07-15 17:35:26 UTC", "ExecMainStartTimestampMonotonic": "72853414", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd reconfigure ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd -d -s ; ignore_errors=no ; start_time=[Wed 2020-07-15 17:35:26 UTC] ; stop_time=[n/a] ; pid=2956 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/sbin/multipath ; argv[]=/sbin/multipath -A ; ignore_errors=yes ; start_time=[Wed 2020-07-15 17:35:26 UTC] ; stop_time=[Wed 2020-07-15 17:35:26 UTC] ; pid=2954 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/multipathd.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "multipathd.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Wed 2020-07-15 17:35:26 UTC", "InactiveExitTimestampMonotonic": "72840228", "InvocationID": "0d8509009cdf497484b9afbca9bd1072", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": " infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540732", "LimitNPROCSoft": "1540732", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540732", "LimitSIGPENDINGSoft": "1540732", 
"LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "2956", "MemoryAccounting": "yes", "MemoryCurrent": "14000128", "MemoryDenyWriteExecute": "no ", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "multipathd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Wed 2020-07-15 17:35:26 UTC", "StateChangeTimestampMonotonic": "73041017", "StateDirectoryMode": "0755", "StatusErrno": "0", "StatusText": "up", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisal locate": "no", "TasksAccounting": "yes", "TasksCurrent": "7", "TasksMax": "infinity", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "TriggeredBy": "multipathd.socket", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "sysinit.target", "Wants": "systemd-udev-trigger.service systemd-udev-settle.service", "WatchdogTimestamp": "Wed 2020-07-15 17:35:26 UTC", "WatchdogTimestampMonotonic": "73041015", "WatchdogUSec": "0"}} ok: [fmov1n1.sn.dtcorp.com] => {"changed": false, "enabled": true, "name": "multipathd", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2020-07-15 17:34:39 UTC", "ActiveEnterTimestampMonotonic": "42492055", "ActiveExitTimestamp": "Wed 2020-07-15 17:34:34 UTC", "ActiveExitTimestampMonotonic": "37116840", "ActiveState": "active", "After": "multipathd.socket system.slice systemd-udev-settle.service systemd-journald.socket systemd-udev-trigger.service", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Wed 2020-07-15 17:34:39 UTC", "AssertTimestampMonotonic": 
"42317339", "Before": "lvm2-activation-early.service local-fs-pre.target blk-availability.service iscsid.service iscsi.service", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingRe setOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2020-07-15 17:34:39 UTC", "ConditionTimestampMonotonic": "42317295", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": " /system.slice/multipathd.service", "ControlPID": "0", "DefaultDependencies": "no", "Delegate": "no", "Description": "Device-Mapper Multipath Device Controller", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "2493", "ExecMainStartTimestamp": "Wed 2020-07-15 17:34:39 UTC", "ExecMainStartTimestampMonotonic": "42331506", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd reconfigure ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd -d -s ; ignore_errors=no ; start_time=[Wed 2020-07-15 17:34:39 UTC] ; stop_time=[n/a] ; pid=2493 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/sbin/multipath ; argv[]=/sbin/multipath -A ; ignore_errors=yes ; start_time=[Wed 2020-07-15 17:34:39 UTC] ; stop_time=[Wed 2020-07-15 17:34:39 UTC] ; pid=2487 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/multipathd.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "multipathd.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2020-07-15 17:34:34 UTC", "InactiveEnterTimestampMonotonic": "37133688", "InactiveExitTimestamp": "Wed 2020-07-15 17:34:39 UTC", "InactiveExitTimestampMonotonic": "42319292", "InvocationID": "5ba7eea82a1642a48f8315facaeac352", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal" : "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", 
"LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540537", "LimitNPROCSoft": "1540537", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540537", "LimitSIGPENDINGSoft": "1540537", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDir ectoryMode": "0755", "MainPID": "2493", "MemoryAccounting": "yes", "MemoryCurrent": "13725696", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "multipathd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice", "Rest art": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Wed 2020-07-15 17:34:39 UTC", "StateChangeTimestampMonotonic": "42492055", "StateDirectoryMode": "0755", "StatusErrno": "0", "StatusText": "up", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "Syslo gLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "7", "TasksMax": "infinity", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "TriggeredBy": "multipathd.socket", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "sysinit.target", "Wants": "systemd-udev-settle.service systemd-udev-trigger.service", "WatchdogTimestamp": "Wed 2020-07-15 17:34:39 UTC", "WatchdogTimestampMonotonic": "42492054", "WatchdogUSec": "0"}} ok: [fmov1n2.sn.dtcorp.com] => {"changed": false, "enabled": true, "name": 
"multipathd", "state": "started", "status": {"ActiveEnterTimestamp": "Wed 2020-07-15 17:35:11 UTC", "ActiveEnterTimestampMonotonic": "61304081", "ActiveExitTimestamp": "Wed 2020-07-15 17:35:05 UTC", "ActiveExitTimestampMonotonic": "55929642", "ActiveState": "active", "After": "systemd-journald.socket systemd-udev-trigger.service system.slice multipathd.socket systemd-udev-settle.service", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Wed 2020-07-15 17:35:10 UTC", "AssertTimestampMonotonic": "61133190", "Before": "lvm2-activation-early.service local-fs-pre.target iscsid.service iscsi.service blk-availability.service", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingRe setOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Wed 2020-07-15 17:35:10 UTC", "ConditionTimestampMonotonic": "61133113", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": " /system.slice/multipathd.service", "ControlPID": "0", "DefaultDependencies": "no", "Delegate": "no", "Description": "Device-Mapper Multipath Device Controller", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "2505", "ExecMainStartTimestamp": "Wed 2020-07-15 17:35:10 UTC", "ExecMainStartTimestampMonotonic": "61144214", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd reconfigure ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd -d -s ; ignore_errors=no ; start_time=[Wed 2020-07-15 17:35:10 UTC] ; stop_time=[n/a] ; pid=2505 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/sbin/multipath ; argv[]=/sbin/multipath -A ; ignore_errors=yes ; start_time=[Wed 2020-07-15 17:35:10 UTC] ; stop_time=[Wed 2020-07-15 17:35:10 UTC] ; pid=2501 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/multipathd.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "multipathd.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Wed 2020-07-15 17:35:05 UTC", 
"InactiveEnterTimestampMonotonic": "55944529", "InactiveExitTimestamp": "Wed 2020-07-15 17:35:10 UTC", "InactiveExitTimestampMonotonic": "61134238", "InvocationID": "cdc84ca34d644ad5b6f43106fa4cd190", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal" : "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540537", "LimitNPROCSoft": "1540537", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540537", "LimitSIGPENDINGSoft": "1540537", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDir ectoryMode": "0755", "MainPID": "2505", "MemoryAccounting": "yes", "MemoryCurrent": "13950976", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "multipathd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice", "Rest art": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Wed 2020-07-15 17:35:11 UTC", "StateChangeTimestampMonotonic": "61304081", "StateDirectoryMode": "0755", "StatusErrno": "0", "StatusText": "up", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "Syslo gLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "7", 
"TasksMax": "infinity", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "TriggeredBy": "multipathd.socket", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "sysinit.target", "Wants": "systemd-udev-settle.service systemd-udev-trigger.service", "WatchdogTimestamp": "Wed 2020-07-15 17:35:11 UTC", "WatchdogTimestampMonotonic": "61304080", "WatchdogUSec": "0"}} TASK [gluster.infra/roles/backend_setup : Create /etc/multipath/conf.d if doesn't exists] *** task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:18 ok: [fmov1n3.sn.dtcorp.com] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/multipath/conf.d", "secontext": "unconfined_u:object_r:lvm_metadata_t:s0", "size": 28, "state": "directory", "uid": 0} ok: [fmov1n2.sn.dtcorp.com] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/multipath/conf.d", "secontext": "unconfined_u:object_r:lvm_metadata_t:s0", "size": 28, "state": "directory", "uid": 0} ok: [fmov1n1.sn.dtcorp.com] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/multipath/conf.d", "secontext": "unconfined_u:object_r:lvm_metadata_t:s0", "size": 28, "state": "directory", "uid": 0} TASK [gluster.infra/roles/backend_setup : Get the UUID of the devices] ********* task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:24 changed: [fmov1n3.sn.dtcorp.com] => (item=nvme0n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n1", "delta": "0:00:00.011607", "end": "2020-07-15 18:05:01.755402", "item": "nvme0n1", "rc": 0, "start": "2020-07-15 18:05:01.743795", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020750025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020750025385800000004' added"]} changed: [fmov1n2.sn.dtcorp.com] => (item=nvme0n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n1", "delta": "0:00:00.012920", "end": "2020-07-15 18:05:02.213598", "item": "nvme0n1", "rc": 0, "start": "2020-07-15 18:05:02.200678", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020530025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020530025385800000004' added"]} changed: [fmov1n1.sn.dtcorp.com] => (item=nvme0n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n1", "delta": "0:00:00.012176", "end": "2020-07-15 18:05:02.232564", "item": "nvme0n1", "rc": 0, "start": "2020-07-15 18:05:02.220388", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020220025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020220025385800000004' added"]} changed: [fmov1n3.sn.dtcorp.com] => (item=nvme2n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n1", "delta": "0:00:00.012672", "end": "2020-07-15 18:05:06.091794", "item": "nvme2n1", "rc": 0, "start": "2020-07-15 18:05:06.079122", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020730025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020730025385800000004' added"]} changed: [fmov1n2.sn.dtcorp.com] => (item=nvme2n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n1", "delta": "0:00:00.011007", "end": "2020-07-15 
18:05:07.164691", "item": "nvme2n1", "rc": 0, "start": "2020-07-15 18:05:07.153684", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020190025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020190025385800000004' added"]} changed: [fmov1n1.sn.dtcorp.com] => (item=nvme2n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n1", "delta": "0:00:00.011155", "end": "2020-07-15 18:05:07.235360", "item": "nvme2n1", "rc": 0, "start": "2020-07-15 18:05:07.224205", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7007630025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7007630025385800000004' added"]} changed: [fmov1n3.sn.dtcorp.com] => (item=nvme1n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n1", "delta": "0:00:00.011619", "end": "2020-07-15 18:05:10.406977", "item": "nvme1n1", "rc": 0, "start": "2020-07-15 18:05:10.395358", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020760025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020760025385800000004' added"]} changed: [fmov1n2.sn.dtcorp.com] => (item=nvme1n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n1", "delta": "0:00:00.010991", "end": "2020-07-15 18:05:12.115257", "item": "nvme1n1", "rc": 0, "start": "2020-07-15 18:05:12.104266", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020690025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020690025385800000004' added"]} changed: [fmov1n1.sn.dtcorp.com] => (item=nvme1n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n1", "delta": "0:00:00.012432", "end": "2020-07-15 18:05:12.261194", "item": "nvme1n1", "rc": 0, "start": "2020-07-15 18:05:12.248762", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020540025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020540025385800000004' added"]} TASK [gluster.infra/roles/backend_setup : Check that the blacklist.conf exists] *** task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:30 ok: [fmov1n3.sn.dtcorp.com] => {"changed": false, "stat": {"atime": 1594834128.2723575, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "2c1ec58c96d37eeb81e0378bd4ce8e2bec52e47b", "ctime": 1594834121.8628232, "dev": 64772, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 402736753, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1594834121.8628232, "nlink": 1, "path": "/etc/multipath/conf.d/blacklist.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 72, "uid": 0, "version": "3012466079", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} ok: [fmov1n2.sn.dtcorp.com] => {"changed": false, "stat": {"atime": 1594834128.826686, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "2c1ec58c96d37eeb81e0378bd4ce8e2bec52e47b", "ctime": 1594834123.5452447, "dev": 64772, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 369301917, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, 
"isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1594834123.5442448, "nlink": 1, "path": "/etc/multipath/conf.d/blacklist.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 72, "uid": 0, "version": "1599580069", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} ok: [fmov1n1.sn.dtcorp.com] => {"changed": false, "stat": {"atime": 1594834128.8326972, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "2c1ec58c96d37eeb81e0378bd4ce8e2bec52e47b", "ctime": 1594834123.716216, "dev": 64772, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 235000867, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1594834123.716216, "nlink": 1, "path": "/etc/multipath/conf.d/blacklist.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 72, "uid": 0, "version": "1403336632", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} TASK [gluster.infra/roles/backend_setup : Create blacklist template content] *** task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:35 skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} TASK [gluster.infra/roles/backend_setup : Add wwid to blacklist in blacklist.conf file] *** task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:45 changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020750025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:01.743795', 'end': '2020-07-15 18:05:01.755402', 'delta': '0:00:00.011607', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020750025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n1", "delta": "0:00:00.011607", "end": "2020-07-15 18:05:01.755402", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme0n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme0n1", "rc": 0, "start": "2020-07-15 18:05:01.743795", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020750025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020750025385800000004' added"]}, "msg": "Block inserted"} changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 
'eui.343756304d7020220025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:02.220388', 'end': '2020-07-15 18:05:02.232564', 'delta': '0:00:00.012176', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020220025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n1", "delta": "0:00:00.012176", "end": "2020-07-15 18:05:02.232564", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme0n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme0n1", "rc": 0, "start": "2020-07-15 18:05:02.220388", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020220025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020220025385800000004' added"]}, "msg": "Block inserted"}
changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme0n1', 'stdout': "wwid 'eui.343756304d7020530025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:02.200678', 'end': '2020-07-15 18:05:02.213598', 'delta': '0:00:00.012920', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020530025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n1", "delta": "0:00:00.012920", "end": "2020-07-15 18:05:02.213598", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme0n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme0n1", "rc": 0, "start": "2020-07-15 18:05:02.200678", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020530025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020530025385800000004' added"]}, "msg": "Block inserted"}
changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7020730025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:06.079122', 'end': '2020-07-15 18:05:06.091794', 'delta': '0:00:00.012672', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020730025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n1", "delta": "0:00:00.012672", "end": "2020-07-15 18:05:06.091794", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme2n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme2n1", "rc": 0, "start": "2020-07-15 18:05:06.079122", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020730025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020730025385800000004' added"]}, "msg": "Block inserted"}
changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7020190025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:07.153684', 'end': '2020-07-15 18:05:07.164691', 'delta': '0:00:00.011007', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020190025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n1", "delta": "0:00:00.011007", "end": "2020-07-15 18:05:07.164691", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme2n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme2n1", "rc": 0, "start": "2020-07-15 18:05:07.153684", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020190025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020190025385800000004' added"]}, "msg": "Block inserted"}
changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme2n1', 'stdout': "wwid 'eui.343756304d7007630025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:07.224205', 'end': '2020-07-15 18:05:07.235360', 'delta': '0:00:00.011155', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7007630025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n1", "delta": "0:00:00.011155", "end": "2020-07-15 18:05:07.235360", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme2n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme2n1", "rc": 0, "start": "2020-07-15 18:05:07.224205", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7007630025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7007630025385800000004' added"]}, "msg": "Block inserted"}
changed: [fmov1n3.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020760025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:10.395358', 'end': '2020-07-15 18:05:10.406977', 'delta': '0:00:00.011619', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020760025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n1", "delta": "0:00:00.011619", "end": "2020-07-15 18:05:10.406977", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme1n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme1n1", "rc": 0, "start": "2020-07-15 18:05:10.395358", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020760025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020760025385800000004' added"]}, "msg": "Block inserted"}
changed: [fmov1n2.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020690025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:12.104266', 'end': '2020-07-15 18:05:12.115257', 'delta': '0:00:00.010991', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 'eui.343756304d7020690025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n1", "delta": "0:00:00.010991", "end": "2020-07-15 18:05:12.115257", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme1n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme1n1", "rc": 0, "start": "2020-07-15 18:05:12.104266", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020690025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020690025385800000004' added"]}, "msg": "Block inserted"}
changed: [fmov1n1.sn.dtcorp.com] => (item={'cmd': 'multipath -a /dev/nvme1n1', 'stdout': "wwid 'eui.343756304d7020540025385800000004' added", 'stderr': '', 'rc': 0, 'start': '2020-07-15 18:05:12.248762', 'end': '2020-07-15 18:05:12.261194', 'delta': '0:00:00.012432', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n1', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ["wwid 
'eui.343756304d7020540025385800000004' added"], 'stderr_lines': [], 'failed': False, 'item': 'nvme1n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": true, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n1", "delta": "0:00:00.012432", "end": "2020-07-15 18:05:12.261194", "failed": false, "invocation": {"module_args": {"_raw_params": "multipath -a /dev/nvme1n1", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme1n1", "rc": 0, "start": "2020-07-15 18:05:12.248762", "stderr": "", "stderr_lines": [], "stdout": "wwid 'eui.343756304d7020540025385800000004' added", "stdout_lines": ["wwid 'eui.343756304d7020540025385800000004' added"]}, "msg": "Block inserted"}

TASK [gluster.infra/roles/backend_setup : Reload multipathd] *******************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:55
changed: [fmov1n3.sn.dtcorp.com] => {"changed": true, "cmd": "systemctl reload multipathd", "delta": "0:00:00.024916", "end": "2020-07-15 18:05:37.070314", "rc": 0, "start": "2020-07-15 18:05:37.045398", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [fmov1n1.sn.dtcorp.com] => {"changed": true, "cmd": "systemctl reload multipathd", "delta": "0:00:00.027634", "end": "2020-07-15 18:05:37.564487", "rc": 0, "start": "2020-07-15 18:05:37.536853", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [fmov1n2.sn.dtcorp.com] => {"changed": true, "cmd": "systemctl reload multipathd", "delta": "0:00:00.026871", "end": "2020-07-15 18:05:37.616457", "rc": 0, "start": "2020-07-15 18:05:37.589586", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:19
ok: [fmov1n2.sn.dtcorp.com]
ok: [fmov1n1.sn.dtcorp.com]
ok: [fmov1n3.sn.dtcorp.com]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:27
skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}

NO MORE HOSTS LEFT *************************************************************

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
fmov1n1.sn.dtcorp.com : ok=14 changed=4 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0
fmov1n2.sn.dtcorp.com : ok=13 changed=3 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0
fmov1n3.sn.dtcorp.com : ok=13 changed=3 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0

Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more information.
---

I guess your only option is to edit /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml and replace 'package' with 'dnf' (keep the same indentation: 'dnf' should start two spaces deeper than '- name', just where 'package' starts).

Best Regards,
Strahil Nikolov

On 15 July 2020 at 22:39:09 GMT+03:00, clam2718@gmail.com wrote:
Thank you very much Strahil for your continued assistance. I have tried cleaning up and redeploying four additional times and am still experiencing the same error.
To Summarize
(1) Attempt 1: changed gluster_infra_thick_lvs --> size: 100G to size: '100%PVS' and gluster_infra_thinpools --> lvsize: 500G to lvsize: '100%PVS' (see the sketch after this list).
Result 1: deployment failed -->
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}

(2) Attempt 2: same as Attempt 1, but substituted 99G for '100%PVS'.
Result 2: same as Result 1.

(3) Attempt 3: same as Attempt 1, but added vars: ansible_python_interpreter: /usr/bin/python3.
Result 3: same as Result 1.

(4) Attempt 4: rebooted all three nodes; same as Attempt 1, but omitted the previously edited size arguments, as I read in the documentation at https://github.com/gluster/gluster-ansible-infra that the size/lvsize arguments for the variables gluster_infra_thick_lvs and gluster_infra_lv_logicalvols are optional and default to 100% of the LV size.
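For clarity, the Attempt 1 edit to the wizard-generated variables looked roughly like this (my reconstruction; only the size/lvsize values changed, everything else as in the playbook pasted below):

gluster_infra_thick_lvs:
  - vgname: gluster_vg_nvme0n1
    lvname: gluster_lv_engine
    size: '100%PVS'
gluster_infra_thinpools:
  - vgname: gluster_vg_nvme2n1
    thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
    poolmetadatasize: 1G
    lvsize: '100%PVS'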
At the end of this post are the latest version of the playbook and the corresponding log output. As best I can tell, the nodes are fully updated, default installs using verified images of v4.4.1.1.

From /var/log/cockpit/ovirt-dashboard/gluster-deployment.log I can see that the task at line 33 of /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml is what causes the deployment to fail at this point:
- name: Change to Install lvm tools for RHEL systems.
  package:
    name: device-mapper-persistent-data
    state: present
  when: ansible_os_family == 'RedHat'
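The obvious brute-force edit would presumably be to point this task at the dnf module directly, keeping everything else identical (a sketch only; I have not tried this yet):

- name: Change to Install lvm tools for RHEL systems.
  dnf:
    name: device-mapper-persistent-data
    state: present
  when: ansible_os_family == 'RedHat'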
But package device-mapper-persistent-data is installed:
[root@fmov1n1 ~]# dnf install device-mapper-persistent-data
Last metadata expiration check: 0:32:10 ago on Wed 15 Jul 2020 06:44:19 PM UTC.
Package device-mapper-persistent-data-0.8.5-3.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@fmov1n1 ~]# dnf info device-mapper-persistent-data
Last metadata expiration check: 0:31:44 ago on Wed 15 Jul 2020 06:44:19 PM UTC.
Installed Packages
Name         : device-mapper-persistent-data
Version      : 0.8.5
Release      : 3.el8
Architecture : x86_64
Size         : 1.4 M
Source       : device-mapper-persistent-data-0.8.5-3.el8.src.rpm
Repository   : @System
Summary      : Device-mapper Persistent Data Tools
URL          : https://github.com/jthornber/thin-provisioning-tools
License      : GPLv3+
Description  : thin-provisioning-tools contains check,dump,restore,repair,rmap
             : and metadata_size tools to manage device-mapper thin provisioning
             : target metadata devices; cache check,dump,metadata_size,restore
             : and repair tools to manage device-mapper cache metadata devices
             : are included and era check, dump, restore and invalidate to manage
             : snapshot eras
I can't figure out why Ansible v2.9.10 is not calling dnf. The Ansible dnf module is installed:
[root@fmov1n1 modules]# ansible-doc -t module dnf
DNF (/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py)
Installs, upgrade, removes, and lists packages and groups with the `dnf' package manager.
* This module is maintained by The Ansible Core Team ...
I am unsure how to further troubleshoot from here!
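If it helps, a minimal test I plan to try next in isolation (a sketch; 'use' is the package module's documented parameter for forcing a specific backend instead of relying on auto-detection):

---
- hosts: localhost
  tasks:
    - name: Check whether forcing the dnf backend avoids the Python 2 yum error
      package:
        name: device-mapper-persistent-data
        state: present
        use: dnf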
Thank you again!!! Charles
--- Latest Gluster Playbook (edited from Wizard output)
hc_nodes:
  hosts:
    fmov1n1.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 100G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 1G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
    fmov1n2.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 100G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 1G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
    fmov1n3.sn.dtcorp.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_nvme0n1
          pvname: /dev/mapper/vdo_nvme0n1
        - vgname: gluster_vg_nvme2n1
          pvname: /dev/mapper/vdo_nvme2n1
        - vgname: gluster_vg_nvme1n1
          pvname: /dev/mapper/vdo_nvme1n1
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_nvme0n1
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_nvme2n1
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_nvme1n1
      gluster_infra_vdo:
        - name: vdo_nvme0n1
          device: /dev/nvme0n1
          slabsize: 2G
          logicalsize: 100G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme2n1
          device: /dev/nvme2n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
        - name: vdo_nvme1n1
          device: /dev/nvme1n1
          slabsize: 2G
          logicalsize: 500G
          blockmapcachesize: 128M
          emulate512: 'off'
          writepolicy: auto
          maxDiscardSize: 16M
      blacklist_mpath_devices:
        - nvme0n1
        - nvme2n1
        - nvme1n1
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_nvme0n1
          lvname: gluster_lv_engine
      gluster_infra_thinpools:
        - vgname: gluster_vg_nvme2n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
          poolmetadatasize: 1G
        - vgname: gluster_vg_nvme1n1
          thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_nvme2n1
          thinpool: gluster_thinpool_gluster_vg_nvme2n1
          lvname: gluster_lv_data
        - vgname: gluster_vg_nvme1n1
          thinpool: gluster_thinpool_gluster_vg_nvme1n1
          lvname: gluster_lv_vmstore
  vars:
    ansible_python_interpreter: /usr/bin/python3
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - fmov1n1.sn.dtcorp.com
      - fmov1n2.sn.dtcorp.com
      - fmov1n3.sn.dtcorp.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
--- Latest /var/log/cockpit/ovirt-dashboard/gluster-deployment.log
[root@fmov1n1 modules]# cat /var/log/cockpit/ovirt-dashboard/gluster-deployment.log
ansible-playbook 2.9.10
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /root/../usr/bin/ansible-playbook
  python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
Using /etc/ansible/ansible.cfg as config file
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_config.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main-lvm.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main-lvm.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/fscreate.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_kernelparams.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/fstrim_service.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/luks_device_encrypt.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/bind_tang_server.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/prerequisites.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/distribute_keys.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/master_tasks.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/enable_ganesha.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/add_new_nodes.yml
statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/prerequisites.yml
statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/glusterd_ipv6.yml
statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml
statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/ssl-setup.yml
statically imported: /etc/ansible/roles/gluster.features/roles/ctdb/tasks/setup_ctdb.yml
PLAYBOOK: hc_wizard.yml ******************************************************** 1 plays in /root/../usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml
PLAY [Setup backend] ***********************************************************
TASK [Gathering Facts] ********************************************************* task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:4 ok: [fmov1n2.sn.dtcorp.com] ok: [fmov1n1.sn.dtcorp.com] ok: [fmov1n3.sn.dtcorp.com]
TASK [Check if valid hostnames are provided] *********************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:16 changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n1.sn.dtcorp.com) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "fmov1n1.sn.dtcorp.com"], "delta": "0:00:00.006835", "end": "2020-07-15 18:03:58.366109", "item": "fmov1n1.sn.dtcorp.com", "rc": 0, "start": "2020-07-15 18:03:58.359274", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.21 STREAM fmov1n1.sn.dtcorp.com\n172.16.16.21 DGRAM \n172.16.16.21 RAW ", "stdout_lines": ["172.16.16.21 STREAM fmov1n1.sn.dtcorp.com", "172.16.16.21 DGRAM ", "172.16.16.21 RAW "]} changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n2.sn.dtcorp.com) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "fmov1n2.sn.dtcorp.com"], "delta": "0:00:00.004972", "end": "2020-07-15 18:03:58.569094", "item": "fmov1n2.sn.dtcorp.com", "rc": 0, "start": "2020-07-15 18:03:58.564122", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.22 STREAM fmov1n2.sn.dtcorp.com\n172.16.16.22 DGRAM \n172.16.16.22 RAW ", "stdout_lines": ["172.16.16.22 STREAM fmov1n2.sn.dtcorp.com", "172.16.16.22 DGRAM ", "172.16.16.22 RAW "]} changed: [fmov1n1.sn.dtcorp.com] => (item=fmov1n3.sn.dtcorp.com) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "fmov1n3.sn.dtcorp.com"], "delta": "0:00:00.004759", "end": "2020-07-15 18:03:58.769052", "item": "fmov1n3.sn.dtcorp.com", "rc": 0, "start": "2020-07-15 18:03:58.764293", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.23 STREAM fmov1n3.sn.dtcorp.com\n172.16.16.23 DGRAM \n172.16.16.23 RAW ", "stdout_lines": ["172.16.16.23 STREAM fmov1n3.sn.dtcorp.com", "172.16.16.23 DGRAM ", "172.16.16.23 RAW "]}
TASK [Check if provided hostnames are valid] *********************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:29 ok: [fmov1n1.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" } ok: [fmov1n2.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" } ok: [fmov1n3.sn.dtcorp.com] => { "changed": false, "msg": "All assertions passed" }
TASK [Check if /var/log has enough disk space] ********************************* task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:38 skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [Check if the /var is greater than 15G] *********************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:43 skipping: [fmov1n1.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [Check if disks have logical block size of 512B] ************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:53 skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"} skipping: [fmov1n3.sn.dtcorp.com] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"}
TASK [Check if logical block size is 512 bytes] ******************************** task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:61 skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n1.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"} skipping: [fmov1n2.sn.dtcorp.com] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional resul

I also have this message with the deployment of Gluster. I tried the modifications and it doesn't seem to work. Did you succeed? Here is the error:

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
fatal: [ovnode2.telecom.lan]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [ovnode1.telecom.lan]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [ovnode3.telecom.lan]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}

Same issue with the ovirt-node-ng-installer 4.4.1-2020071311.el8 iso.

[image: gluster-fail.PNG]

On Thu, Jul 16, 2020 at 9:33 AM <dominique.deschenes@gcgenicom.com> wrote:
I also have this message with the deployment of Gluster. I tried the modifications and it doesn't seem to work. Did you succeed?

Have you tried replacing 'package' with 'dnf' in /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml (somewhere around line 33)?

Best Regards,
Strahil Nikolov

On 16 July 2020 at 16:30:04 GMT+03:00, dominique.deschenes@gcgenicom.com wrote:
I also have this message with the deployment of Gluster. I tried the modifications and it doesn't seem to work. Did you succeed?

Dear Strahil, Dominique and Edward:

I reimaged the three hosts with ovirt-node-ng-installer-4.4.1-2020071311.el8.iso just to be sure everything was stock (I had upgraded from v4.4) and attempted a redeploy with all suggested changes EXCEPT replacing "package" with "dnf" --> same failure. I then made Strahil's recommended replacement of "package" with "dnf" and the Gluster deployment succeeded through that section of main.yml, only to fail a little later at:

- name: Install python-yaml package for Debian systems
  package:
    name: python-yaml
    state: present
  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"

I found this notable given that I had not replaced "package" with "dnf" in the prior section:

- name: Change to Install lvm tools for debian systems.
  package:
    name: thin-provisioning-tools
    state: present
  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"

and deployment had not failed there. Anyhow, I deleted the two Debian stanzas, as I am deploying from Node (CentOS based), cleaned up, wiped my drives ('dmsetup remove eui.xxx...' and 'wipefs --all --force /dev/nvme0n1 /dev/nvmeXn1 ...') and redeployed again. This time the Gluster deployment seemed to execute main.yml OK, only to fail in a new file, vdo_create.yml:

TASK [gluster.infra/roles/backend_setup : Install VDO dependencies] ************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:26
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}

Expecting that this might continue, I have been looking into the documentation for how "package" works, hoping to find a root cause rather than reviewing every *.yml file and replacing "package" with "dnf" in all of them. Thank you VERY much to Strahil for helping me!

If Strahil or anyone else has any additional troubleshooting tips, suggestions, insight or solutions, I am all ears. I will continue to update as I progress.

Respectfully,
Charles
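P.S. One idea I am considering instead of editing every task file (a sketch only, untested on these nodes; hc_nodes is just the inventory group from the generated playbook): add a play-level module_defaults so that every package task is forced onto the dnf backend:

- hosts: hc_nodes
  module_defaults:
    package:
      use: dnf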

Hi, thank you for your answers.

I tried replacing "package" with "dnf". The installation of Gluster then seems to work well, but I had a similar message during the deployment of the Hosted Engine. Here is the error:

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}

Dominique Deschênes
Project Engineer, IT Manager
816, boulevard Guimond, Longueuil J4G 1T5
450 670-8383 x105  450 670-2259

----- Message received -----
From: clam2718@gmail.com
Date: 16/07/20 13:40
To: users@ovirt.org
Subject: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

Which version of CentOS 8 are you using: Stream or regular, and which release?

Best Regards,
Strahil Nikolov

On 16 July 2020 at 21:07:57 GMT+03:00, "Dominique Deschênes" <dominique.deschenes@gcgenicom.com> wrote:
Hi, thank you for your answers. I tried replacing "package" with "dnf". The installation of Gluster then seems to work well, but I had a similar message during the deployment of the Hosted Engine:

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed": false, "msg": "The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead."}

On Fri, Jul 17, 2020 at 10:09 AM Strahil Nikolov via Users <users@ovirt.org> wrote:
Which version of CentOS 8 are you using: Stream or regular, and which release?
Best Regards, Strahil Nikolov
Strahil, see the other thread I have just opened. It happens to me too with the latest oVirt Node iso for 4.4.1.1, dated 13/07.

In my opinion there is a major problem with all the yaml files using the yum and package modules:
- yum, because it expects python2, which is missing
- package, because it doesn't autodetect dnf, tries yum, and fails for the same reason as above

A possible workaround that avoids modifying all the yaml files is to install python2; I don't know if a channel could be enabled to have python2 back.

Gianluca
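P.S. To check the autodetection part, this quick sketch should print what Ansible thinks the package manager is on a node (ansible_facts.pkg_mgr is the fact the package module consults):

- hosts: all
  tasks:
    - debug:
        var: ansible_facts.pkg_mgr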

Hi,

I use the oVirt ISO file ovirt-node-ng-installer-4.4.1-2020070811.el8.iso (July 8). I just saw that there is a new version from July 13 (4.4.1-2020071311). I will try it.

Dominique Deschênes
Project Engineer, IT Manager
816, boulevard Guimond, Longueuil J4G 1T5
450 670-8383 x105  450 670-2259

----- Message received -----
From: Strahil Nikolov (hunter86_bg@yahoo.com)
Date: 17/07/20 04:03
To: Dominique Deschênes (dominique.deschenes@gcgenicom.com), clam2718@gmail.com, users@ovirt.org
Subject: Re: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

Which version of CentOS 8 are you using: Stream or regular, and which release?

Best Regards,
Strahil Nikolov

On Fri, Jul 17, 2020 at 1:34 PM Dominique Deschênes <dominique.deschenes@gcgenicom.com> wrote:
Hi,
I use the oVirt ISO file ovirt-node-ng-installer-4.4.1-2020070811.el8.iso (July 8).
I just saw that there is a new version from July 13 (4.4.1-2020071311). I will try it.
No. See the thread I referred to; I'm using the July 13 version. Follow the Bugzilla I have opened: https://bugzilla.redhat.com/show_bug.cgi?id=1858234

Gianluca

Can you provide the target's facts in the bug report?

Best Regards,
Strahil Nikolov

On 17 July 2020 at 14:48:39 GMT+03:00, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
No. See the thread I referred to; I'm using the July 13 version. Follow the Bugzilla I have opened: https://bugzilla.redhat.com/show_bug.cgi?id=1858234

They should be in the log files I attached to the Bugzilla, if you download the tar.gz file.

Gianluca

On Fri, Jul 17, 2020 at 4:45 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Can you provide the target's facts in the bug report?

Best Regards,
Strahil Nikolov
participants (7)
- clam2718@gmail.com
- Dominique Deschênes
- dominique.deschenes@gcgenicom.com
- Edward Berger
- Gianluca Cecchi
- Ritesh Chikatwar
- Strahil Nikolov