Gluster deployment fails with missing UUID

On Tue, Apr 28, 2020 at 8:22 PM Shareef Jalloq <shareef@jalloq.co.uk> wrote:

Hi,

I'm running the gluster deployment flow and am trying to use a second drive as the gluster volume. It's /dev/sdb on each node and I'm using JBOD mode. The following gluster ansible task fails, and a Google search doesn't bring up much:

    TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
    failed: [ovirt-gluster-01.jalloq.co.uk] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "err": " Couldn't find device with uuid Y8FVs8-LP6w-R6CR-Yosh-c40j-17XP-ttP3Np.\n Couldn't find device with uuid tA4lpO-hM9f-S8ci-BdPh-lTve-0Rh1-3Bcsfy.\n Couldn't find device with uuid RG3w6j-yrxn-2iMw-ngd0-HgMS-i5dP-CGjaRk.\n Couldn't find device with uuid lQV02e-TUZE-PXCd-GWEd-eGqe-c2xC-pauHG7.\n Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
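For reference, the failing loop item amounts to creating an LVM physical volume and volume group on each node; a minimal sketch of the equivalent manual commands (inferred from the task output above, not from the role source):

    # Roughly what the "Create volume groups" item attempts on each host:
    pvcreate /dev/sdb                  # the step failing with "excluded by a filter", rc=5
    vgcreate gluster_vg_sdb /dev/sdb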

On April 29, 2020 2:39:05 AM GMT+03:00, Jayme <jaymef@gmail.com> wrote:

Has the drive been used before? It might have an existing partition or filesystem on it. If you are sure it's fine to overwrite, try running 'wipefs -a /dev/sdb' on all hosts. Also make sure there aren't any filters set up in lvm.conf (there shouldn't be on a fresh install, but it's worth checking).
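As a suggested pre-check before the destructive step: run without options, wipefs only lists the signatures it finds, so you can see what is on the disk first (assuming the data disk really is /dev/sdb on every host):

    # Non-destructive: list any existing signatures on the disk:
    wipefs /dev/sdb
    lsblk -f /dev/sdb

    # Destructive: erase all detected signatures (run on every host):
    wipefs -a /dev/sdb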

On Wed, Apr 29, 2020 at 10:17 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

Actually, best practice is not to use /dev/sdXXX, as those names can change. In your case the LUN is most probably not fresh, so wipe it with dd/blktrim so that any remnants of the old FS signature are gone.

Best Regards,
Strahil Nikolov
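A sketch of that wipe, assuming /dev/sdb is the disk to clear ('blktrim' here presumably refers to blkdiscard):

    # Zero the first 32 MiB, where the partition table and most
    # filesystem/LVM signatures live (note: GPT keeps a backup table at
    # the END of the disk, which this does not touch; wipefs -a does):
    dd if=/dev/zero of=/dev/sdb bs=1M count=32 oflag=direct

    # On SSDs and thin-provisioned LUNs, discarding the whole device is
    # faster and clears everything:
    blkdiscard /dev/sdb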

On Wed, Apr 29, 2020 at 5:42 PM Shareef Jalloq <shareef@jalloq.co.uk> wrote:

Ah, of course. I was assuming something had gone wrong with the deployment and it couldn't clean up its own mess. I'll raise a bug against the documentation.

Strahil, what are the alternatives to using /dev/sdXXX?

On Wed, Apr 29, 2020 at 6:21 PM Shareef Jalloq <shareef@jalloq.co.uk> wrote:

Now that I've fixed that, the deployment fails with an LVM filter error instead. I'm not familiar with filters, but there aren't any uncommented instances of 'filter' in /etc/lvm/lvm.conf.

On Thu, Apr 30, 2020 at 10:13 AM Shareef Jalloq <shareef@jalloq.co.uk> wrote:

Having no luck here. I've had a read through the LVM config documentation, and there were no filters enabled in lvm.conf. I enabled debug logging and can see the default global filter being applied. I then manually forced the 'all' filter, and 'pvcreate /dev/sdb' still tells me the device is excluded by a filter. The devices section from 'lvmconfig' follows. What's wrong?

    devices {
        dir="/dev"
        scan="/dev"
        obtain_device_list_from_udev=1
        external_device_info_source="none"
        preferred_names=["^/dev/mpath/","^/dev/mapper/mpath","^/dev/[hs]d"]
        filter="a|.*/|"
        cache_dir="/etc/lvm/cache"
        cache_file_prefix=""
        write_cache_state=1
        sysfs_scan=1
        scan_lvs=0
        multipath_component_detection=1
        md_component_detection=1
        fw_raid_component_detection=0
        md_chunk_alignment=1
        data_alignment_detection=1
        data_alignment=0
        data_alignment_offset_detection=1
        ignore_suspended_devices=0
        ignore_lvm_mirrors=1
        disable_after_error_count=0
        require_restorefile_with_uuid=1
        pv_min_size=2048
        issue_discards=0
        allow_changes_with_duplicate_pvs=1
    }
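One point worth knowing here, since the debug log mentions a global filter: a device must pass both 'filter' and 'global_filter', and 'global_filter' cannot be relaxed by changing 'filter' alone, which would explain why forcing the 'all' filter changed nothing. A way to see what is actually in effect (a suggestion, not from the thread):

    # Show the effective filter settings, merged from every LVM config file:
    lvmconfig --type full devices/filter devices/global_filter

    # Locate any global_filter definitions:
    grep -rn global_filter /etc/lvm/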

On April 30, 2020 12:31:59 PM GMT+03:00, Shareef Jalloq <shareef@jalloq.co.uk> wrote:

Changing to /dev/mapper names seems to work, but if anyone can tell me why the /dev/sd* naming is filtered, that would help my understanding.
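One likely explanation, given that the /dev/mapper name works: multipathd has claimed the disk, and with multipath_component_detection=1 (visible in the lvmconfig output above) LVM refuses to use a bare path device such as /dev/sdb, only the multipath map on top of it. A quick check, as a suggestion (see also Strahil's reply below):

    # If sdb is listed as a path under a multipath map, LVM will only
    # accept the map device (/dev/mapper/<name>), not /dev/sdb itself:
    multipath -ll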

On Thu, Apr 30, 2020 at 9:59 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

Do you use multipath or VDO? In the Linux storage stack you should always use the top-layer device.

For example, you might have something like this (formatting is hard to do over e-mail):

    /dev/mapper/VDO1     <- VDO device (top layer)
    /dev/md0             <- MD RAID
    /dev/mapper/mpatha   <- multipath
    /dev/sdb             <- raw path

As you can see, in that case sdb is a path of mpatha, mpatha is part of the RAID device, and that RAID device backs the VDO device. In your case it could be something else.

Best Regards,
Strahil Nikolov
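On each host, lsblk shows that layering at a glance (a sketch; the output below is illustrative, not taken from the poster's systems):

    lsblk /dev/sdb
    # NAME       TYPE
    # sdb        disk
    # └─mpatha   mpath   <- if a layer like this exists, give LVM/gluster
    #                       the topmost device, not /dev/sdb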

I would recommend doing the cleanup from Cockpit or, if you are using a CLI-based deployment, using "/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_cleanup.yml" with your inventory. Then try to deploy again. The cleanup takes care of everything.

--
Thanks,
Gobinda
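A sketch of the CLI form (the inventory path is a placeholder for your own file):

    # Run the cleanup playbook against the same inventory used for deployment:
    ansible-playbook -i <your_inventory_file> \
        /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_cleanup.yml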

On Thu, Apr 30, 2020 at 3:43 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

Can you get debug output from ansible? I haven't deployed oVirt recently.

Best Regards,
Strahil Nikolov
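For a CLI-driven run, ansible's verbosity flags are the usual way to get that (generic ansible, not oVirt-specific; playbook and inventory names are placeholders):

    # -vvv prints per-task module arguments and results, which usually
    # shows exactly why a storage task failed:
    ansible-playbook -vvv -i <your_inventory_file> <deployment_playbook>.yml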

It's running now using the /dev/mapper/by-id name, so I'll just stick with that and use it in the future. Thanks.

Hi Shareef,

In general we should use persistent names like /dev/disk/by-id/scsi-XYZ or /dev/disk/by-id/wwn-XYZ if we want to be idempotent (able to rerun the ansible play/role multiple times, even after a reboot).

For example, I make an array of all available disks, excluding the system disk (by filtering out disks with partitions), and then use those as PVs in a VG; from there everything else is easy. If you need to separate the disks by size (multiple VGs), you can sort the array and then select which disk becomes a PV for a specific VG. Another approach is to filter the disks by vendor or type and then create your VGs with ansible.

Anyway, for an initial deployment /dev/sdXYZ is enough.

Best Regards,
Strahil Nikolov
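For reference, those persistent names can be listed on a host like this (the symlink target shown is illustrative):

    # Stable names that survive reboots and device reordering:
    ls -l /dev/disk/by-id/ | grep -E 'scsi-|wwn-'
    # ... wwn-0x5000c500a1b2c3d4 -> ../../sdb      (illustrative)

    # Use the /dev/disk/by-id/... (or /dev/mapper/...) path in the
    # deployment wizard instead of /dev/sdb.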
participants (4)

- Gobinda Das
- Jayme
- Shareef Jalloq
- Strahil Nikolov