On April 30, 2020 12:31:59 PM GMT+03:00, Shareef Jalloq <shareef(a)jalloq.co.uk> wrote:
Changing to /dev/mapper names seems to work, but if anyone can tell me why
the /dev/sd* naming is filtered, that would help my understanding.
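One way to see which filter decision is rejecting the device is to scan with
full verbosity and look at the filter messages (a rough sketch; the exact
wording of the debug output varies between LVM versions):

    # show how LVM's device filters treat each block device, /dev/sdb included
    pvs -a -vvvv 2>&1 | grep -iE 'filter|excluded'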
On Thu, Apr 30, 2020 at 10:13 AM Shareef Jalloq <shareef(a)jalloq.co.uk> wrote:
> Having no luck here. I've had a read of the LVM config usage and there were
> no filters enabled in lvm.conf. I enabled debug logging and can see the
> default global filter being applied. I then manually forced the 'all' filter
> and 'pvcreate /dev/sdb' still tells me it is excluded by a filter. The
> devices section from 'lvmconfig' follows. What's wrong?
>
> devices {
>     dir="/dev"
>     scan="/dev"
>     obtain_device_list_from_udev=1
>     external_device_info_source="none"
>     preferred_names=["^/dev/mpath/","^/dev/mapper/mpath","^/dev/[hs]d"]
>     filter="a|.*/|"
>     cache_dir="/etc/lvm/cache"
>     cache_file_prefix=""
>     write_cache_state=1
>     sysfs_scan=1
>     scan_lvs=0
>     multipath_component_detection=1
>     md_component_detection=1
>     fw_raid_component_detection=0
>     md_chunk_alignment=1
>     data_alignment_detection=1
>     data_alignment=0
>     data_alignment_offset_detection=1
>     ignore_suspended_devices=0
>     ignore_lvm_mirrors=1
>     disable_after_error_count=0
>     require_restorefile_with_uuid=1
>     pv_min_size=2048
>     issue_discards=0
>     allow_changes_with_duplicate_pvs=1
> }
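Nothing in that devices section looks like it should reject /dev/sdb —
filter="a|.*/|" accepts everything — but note that global_filter is a separate
setting from filter and is not shown above. A quick check (assuming an LVM
version whose lvmconfig supports --type full; output format differs a bit
between versions):

    # print both filter settings, including the compiled-in defaults
    lvmconfig --type full devices/filter devices/global_filter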
>
> On Wed, Apr 29, 2020 at 6:21 PM Shareef Jalloq <shareef(a)jalloq.co.uk> wrote:
>
>> Actually, now that I've fixed that, the deployment fails with an LVM filter
>> error. I'm not familiar with filters, but there aren't any uncommented
>> instances of 'filter' in /etc/lvm/lvm.conf.
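A quick way to confirm no filter is set anywhere else either (LVM also reads
/etc/lvm/lvmlocal.conf if it exists; this just greps for uncommented settings):

    # look for active filter/global_filter lines in the LVM config files
    grep -nE '^[[:space:]]*(global_)?filter[[:space:]]*=' \
        /etc/lvm/lvm.conf /etc/lvm/lvmlocal.conf 2>/dev/null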
>>
>>
>>
>> On Wed, Apr 29, 2020 at 5:42 PM Shareef Jalloq <shareef(a)jalloq.co.uk> wrote:
>>
>>> Ah, of course. I was assuming something had gone wrong with the deployment
>>> and it couldn't clean up its own mess. I'll raise a bug against the
>>> documentation.
>>>
>>> Strahil, what are the other options to using /dev/sdxxx?
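(One common alternative is the persistent names udev creates under
/dev/disk/by-id/, or the device-mapper names further up the stack; the exact
entries depend on the hardware, so the below is only an illustration:)

    # stable, hardware-based names that survive /dev/sdX reordering
    ls -l /dev/disk/by-id/

    # or use the multipath/device-mapper name instead of the raw path, e.g.
    pvcreate /dev/mapper/mpatha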
>>>
>>> On Wed, Apr 29, 2020 at 10:17 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>>
>>>> On April 29, 2020 2:39:05 AM GMT+03:00, Jayme <jaymef(a)gmail.com> wrote:
>>>> >Has the drive been used before? It might have an existing
>>>> >partition/filesystem on it. If you are sure it's fine to overwrite,
>>>> >try running 'wipefs -a /dev/sdb' on all hosts. Also make sure there
>>>> >aren't any filters set up in lvm.conf (there shouldn't be on a fresh
>>>> >install, but it's worth checking).
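A minimal sketch of that check-and-wipe step, assuming /dev/sdb really is the
spare data disk on every host and holds nothing worth keeping:

    # list any existing partition table / filesystem / RAID signatures
    wipefs /dev/sdb

    # destroy all detected signatures
    wipefs -a /dev/sdb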
>>>> >
>>>> >On Tue, Apr 28, 2020 at 8:22 PM Shareef Jalloq <shareef(a)jalloq.co.uk> wrote:
>>>> >
>>>> >> Hi,
>>>> >>
>>>> >> I'm running the gluster deployment flow and am trying to use a second
>>>> >> drive as the gluster volume. It's /dev/sdb on each node and I'm using
>>>> >> the JBOD mode.
>>>> >>
>>>> >> I'm seeing the following gluster ansible task fail, and a google search
>>>> >> doesn't bring up much.
>>>> >>
>>>> >> TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
>>>> >>
>>>> >> failed: [ovirt-gluster-01.jalloq.co.uk] (item={u'vgname': u'gluster_vg_sdb',
>>>> >> u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false,
>>>> >> "err": "Couldn't find device with uuid Y8FVs8-LP6w-R6CR-Yosh-c40j-17XP-ttP3Np.\n
>>>> >> Couldn't find device with uuid tA4lpO-hM9f-S8ci-BdPh-lTve-0Rh1-3Bcsfy.\n
>>>> >> Couldn't find device with uuid RG3w6j-yrxn-2iMw-ngd0-HgMS-i5dP-CGjaRk.\n
>>>> >> Couldn't find device with uuid lQV02e-TUZE-PXCd-GWEd-eGqe-c2xC-pauHG7.\n
>>>> >> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
>>>> >> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb'
>>>> >> failed", "rc": 5}
>>>>
>>>> Actually, best practice is not to use /dev/sdXXX names, as they can change.
>>>>
>>>> In your case most probably the LUN is not fresh, so wipe it with
>>>> dd/blkdiscard so that any remnants of an old FS signature are gone.
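For example, something along these lines (destructive, and assuming /dev/sdb
is the right disk; blkdiscard only works on devices that support discard/TRIM):

    # zero the start of the disk, where most old signatures live
    dd if=/dev/zero of=/dev/sdb bs=1M count=32 conv=fsync

    # or, on SSDs/thin-provisioned LUNs, discard the whole device
    blkdiscard /dev/sdb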
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>
Do you use multipath or VDO?
In the Linux stack you should always use the top-layer device.
For example, you might have something like this, top to bottom (formatting is
hard to do over e-mail):
LV
Pool LV
VG
PV
/dev/mapper/VDO1
/dev/md0
/dev/mapper/mpatha
/dev/sdb

As you can see, in that case sdb is a path of mpatha, mpatha is a member of the
RAID device (md0), and that RAID device is backing the VDO device.
In your case it could be something else.
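One way to check what the stack looks like on a given host (just an
illustration; the device names will differ on your systems):

    # show the block device stack from the disk up through multipath/RAID/VDO
    lsblk -o NAME,TYPE,SIZE,MOUNTPOINT /dev/sdb

    # then point pvcreate / the deployment at the top-layer device, e.g.
    pvcreate /dev/mapper/VDO1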
Best Regards,
Strahil Nikolov