Gluster Hyperconverged fails with single disk partitioned

Hi,

I am trying to set up a single-host Self-Hosted hyperconverged setup with GlusterFS. I have custom partitioning where I give 100G to oVirt and its partitions, and the remaining ~800G to a physical partition (/dev/sda4).

When I try to create the gluster deployment with the wizard, it fails:

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
failed: [ovirt-macpro-16.lab.ced.bskyb.com] (item={'key': 'gluster_vg_sda4', 'value': [{'vgname': 'gluster_vg_sda4', 'pvname': '/dev/sda4'}]}) => {"ansible_loop_var": "item", "changed": false, "err": " Device /dev/sda4 excluded by a filter.\n", "item": {"key": "gluster_vg_sda4", "value": [{"pvname": "/dev/sda4", "vgname": "gluster_vg_sda4"}]}, "msg": "Creating physical volume '/dev/sda4' failed", "rc": 5}

I checked, and the /etc/lvm/lvm.conf filter doesn't allow /dev/sda4; it only allows the PV for the onn VG. Once I manually add /dev/sda4 to the lvm filter, it works fine and the gluster deployment completes.

fdisk:

# fdisk -l /dev/sda
Disk /dev/sda: 931.9 GiB, 1000555581440 bytes, 1954210120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FE209000-85B5-489A-8A86-4CF0C91B2E7D

Device         Start        End    Sectors   Size Type
/dev/sda1       2048    1230847    1228800   600M EFI System
/dev/sda2    1230848    3327999    2097152     1G Linux filesystem
/dev/sda3    3328000  213043199  209715200   100G Linux LVM
/dev/sda4  213043200 1954209791 1741166592 830.3G Linux filesystem

lvs:

# lvs
  LV                                 VG  Attr       LSize  Pool  Origin                           Data%  Meta%  Move Log Cpy%Sync Convert
  home                               onn Vwi-aotz-- 10.00g pool0                                   0.11
  ovirt-node-ng-4.4.4-0.20201221.0   onn Vwi---tz-k 10.00g pool0 root
  ovirt-node-ng-4.4.4-0.20201221.0+1 onn Vwi-aotz-- 10.00g pool0 ovirt-node-ng-4.4.4-0.20201221.0 25.26
  pool0                              onn twi-aotz-- 95.89g                                          2.95  14.39
  root                               onn Vri---tz-k 10.00g pool0
  swap                               onn -wi-ao----  4.00g
  tmp                                onn Vwi-aotz-- 10.00g pool0                                    0.12
  var                                onn Vwi-aotz-- 20.00g pool0                                    0.92
  var_crash                          onn Vwi-aotz-- 10.00g pool0                                    0.11
  var_log                            onn Vwi-aotz-- 10.00g pool0                                    0.13
  var_log_audit                      onn Vwi-aotz--  4.00g pool0                                    0.27

# grep filter /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-QrvErF-eaS9-PxbI-wCBV-3OxJ-V600-NG7raZ$|", "r|.*|"]

Am I doing something oVirt isn't expecting? Is there any way to tell the gluster deployment to add the device to the lvm config?

Thanks,
Shantur
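For reference, the manual lvm.conf change described above boils down to adding an accept rule for /dev/sda4 ahead of the catch-all reject. A minimal sketch, assuming the existing lvm-pv-uuid entry stays as-is (only the "a|^/dev/sda4$|" rule is new):

    # /etc/lvm/lvm.conf -- sketch of the manual workaround
    filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-QrvErF-eaS9-PxbI-wCBV-3OxJ-V600-NG7raZ$|", "a|^/dev/sda4$|", "r|.*|"]

With the filter edited this way, running pvcreate /dev/sda4 by hand (or re-running the wizard) should no longer hit the "excluded by a filter" error.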

oVirt is expecting an LVM volume, not a raw partition.

-derek

Thanks Derek, but I don't think that is the case, as per the documentation:
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hy...
https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-st...

I have found a workaround for this. The gluster.infra ansible role can exclude and reset lvm filters when the "gluster_infra_lvm" variable is defined:
https://github.com/gluster/gluster-ansible-infra/blob/2522d3bd722be86139c572...

1. Go through the gluster deployment wizard up to the configuration review step, just before Deploy.
2. Click Edit and scroll down to the "vars:" section.
3. Just under "vars:", add "gluster_infra_lvm: SOMETHING", adjusting the indentation to match the other variables (a sketch of the edited section follows below).
4. Don't forget to click Save at the top before clicking Deploy.

This resets the filter and sets it back again with the correct devices.
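To make step 3 concrete, here is a rough, hypothetical sketch of what the edited vars section might look like. The gluster_infra_volume_groups entry simply mirrors the item shown in the failing task above; the variables the wizard generates for you will differ, and the value given to gluster_infra_lvm appears to be arbitrary, since the role only checks whether the variable is defined:

    vars:
      # added line - any value should do, the role keys off "is defined"
      gluster_infra_lvm: reset
      # already generated by the wizard (shown here only for context)
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sda4
          pvname: /dev/sda4

After saving and clicking Deploy, the "Create volume groups" task should get past the "Device /dev/sda4 excluded by a filter" error.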