Gluster setup fails - nearly there, I think...

Gluster fails with:

    vdo: ERROR - Device /dev/sdb excluded by a filter.

However, I have run:

    [root@ovirt1 ~]# vdo create --name=vdo1 --device=/dev/sdb --force
    Creating VDO vdo1
    Starting VDO vdo1
    Starting compression on VDO vdo1
    VDO instance 1 volume is ready at /dev/mapper/vdo1
    [root@ovirt1 ~]#

There are no filters in lvm.conf. I have run

    wipefs -a /dev/sdb --force

on all hosts before the start.
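Since the role reports "excluded by a filter" while lvm.conf looks clean, it can help to check both filter settings and any leftover signatures before re-running the wizard. A minimal sketch - the `active_filters` helper is an assumption for illustration, not something from the thread:

```shell
# Hypothetical helper: list active (uncommented) filter or global_filter
# settings in an lvm.conf-style file. A rejecting filter can live in
# either setting, so "no filters in lvm.conf" means checking both.
active_filters() {
    grep -E '^[[:space:]]*(global_)?filter[[:space:]]*=' "$1"
}

# On a real host (run as root; the device name is from the thread):
#   active_filters /etc/lvm/lvm.conf
#   lvmconfig devices/filter devices/global_filter  # effective values
#   wipefs -n /dev/sdb   # dry run: list leftover signatures, erase nothing
```

`wipefs -n` is the safe first step here: it shows which signature LVM may be reacting to without destroying anything.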

Full log at the point of failure:

    TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
    task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:9
    failed: [gfs1.gluster.private] (item={u'writepolicy': u'auto', u'name': u'vdo_sdb',
      u'readcachesize': u'20M', u'readcache': u'enabled', u'emulate512': u'on',
      u'logicalsize': u'11000G', u'device': u'/dev/sdb', u'slabsize': u'32G',
      u'blockmapcachesize': u'128M'}) => {"ansible_loop_var": "item", "changed": false,
      "err": "vdo: ERROR - Device /dev/sdb excluded by a filter.\n",
      "msg": "Creating VDO vdo_sdb failed.", "rc": 1}
    failed: [gfs2.gluster.private] (same item and error)
    failed: [gfs3.gluster.private] (same item and error)

    NO MORE HOSTS LEFT *************************************************************

    PLAY RECAP *********************************************************************
    gfs1.gluster.private : ok=12 changed=1 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0
    gfs2.gluster.private : ok=13 changed=2 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0
    gfs3.gluster.private : ok=12 changed=1 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0

    Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more informations.

Check lvm.conf to see if there are any filters set on that device.

On Sat, Nov 23, 2019 at 3:27 PM <rob.downer@orbitalsystems.co.uk> wrote:
Gluster fails with vdo: ERROR - Device /dev/sdb excluded by a filter.
however I have run
    [root@ovirt1 ~]# vdo create --name=vdo1 --device=/dev/sdb --force
    Creating VDO vdo1
    Starting VDO vdo1
    Starting compression on VDO vdo1
    VDO instance 1 volume is ready at /dev/mapper/vdo1
    [root@ovirt1 ~]#
there are no filters in lvm.conf
I have run
    wipefs -a /dev/sdb --force
on all hosts before the start.

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/RPVWGWIP35QWFN...

So no filters in there... Also, the Gluster / Engine wizard only shows setup for a single node. I have 3 nodes in the Dashboard and have shared passwordless SSH keys between the host and the other two hosts via the backend Gluster network.

OK, so I found that the system was set to MBR... there are no filters in lvm.conf. However, disk creation still fails at the VDO creation step with the same error. I have rewritten the disk label as GPT:

    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/sdb.
    The operation has completed successfully.
    [root@ovirt2 ~]#
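One detail worth knowing for the MBR-to-GPT situation above: GPT also writes a backup header at the end of the disk, so a stale label can survive a wipe of just the first sectors and keep tripping LVM's device scan. `sgdisk --zap-all /dev/sdb` (or `wipefs -a`) clears both ends; this sketch shows the both-ends idea with plain GNU dd on an image file (the `zap_ends` helper and the 1 MiB span are assumptions for illustration):

```shell
# Zero the first and last MiB of a block device or disk image, mimicking
# what "sgdisk --zap-all" does to primary + backup GPT structures.
# Hypothetical helper; uses GNU dd's oflag=seek_bytes.
zap_ends() {
    dev=$1
    size=$(wc -c < "$dev")   # on a real device, use: blockdev --getsize64
    # zero the first MiB (MBR, primary GPT, most filesystem superblocks)
    dd if=/dev/zero of="$dev" bs=1048576 count=1 conv=notrunc 2>/dev/null
    # zero the last MiB (backup GPT header lives in the final sectors)
    if [ "$size" -gt 1048576 ]; then
        dd if=/dev/zero of="$dev" bs=1048576 count=1 \
           seek=$((size - 1048576)) oflag=seek_bytes conv=notrunc 2>/dev/null
    fi
}
```

On a real host the simpler route is `sgdisk --zap-all /dev/sdb` followed by `partprobe`; the sketch just makes explicit why wiping only the start of the disk is not always enough.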

I got GlusterFS to deploy by first creating, on each host, the VDO volume that Gluster had tried and failed to create:

    vdo create --name=vdo_sdb --device=/dev/sdb --force

Re-running the Gluster deployment wizard then completed without error.
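The manual workaround above can be applied to all three hosts in one loop. A sketch - the `predeploy_vdo` wrapper and its pluggable runner argument are assumptions so the loop can be dry-run; the host names, VDO name, and device are the ones from the failing play:

```shell
# Run the vdo create the gluster.infra role attempted, on every host.
# Pass "echo" as the runner for a dry run; with no argument it uses ssh
# (which assumes the passwordless keys mentioned earlier in the thread).
predeploy_vdo() {
    run=${1:-ssh}
    for host in gfs1.gluster.private gfs2.gluster.private gfs3.gluster.private; do
        "$run" "$host" vdo create --name=vdo_sdb --device=/dev/sdb --force
    done
}
# dry run:  predeploy_vdo echo
# real run: predeploy_vdo
```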

I guess it's filtered because /dev/sdb today is not /dev/sdb after a reboot. You'd better use a persistent name like /dev/disk/by-id/scsi-xxx or anything in /dev/disk/by-id/, /dev/disk/by-uuid/, /dev/disk/by-path/, /dev/disk/by-partuuid/ - as they are supposed to be persistent.

Best Regards,
Strahil Nikolov
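Strahil's suggestion can be scripted: pick the /dev/disk/by-id/ entry whose symlink resolves back to the kernel name. A sketch - `persistent_name` and its directory parameter are assumptions for illustration (a disk usually has several by-id aliases, e.g. scsi- and wwn-; this returns the first match):

```shell
# Resolve a kernel name like "sdb" to a stable by-id path. The directory
# is parameterized so the function can be pointed at a test tree; on a
# real host it would default to /dev/disk/by-id.
persistent_name() {
    target=$1
    byid_dir=${2:-/dev/disk/by-id}
    for link in "$byid_dir"/*; do
        # each entry is a symlink such as scsi-xxx -> ../../sdb
        if [ "$(basename "$(readlink "$link" 2>/dev/null)")" = "$target" ]; then
            printf '%s\n' "$link"
            return 0
        fi
    done
    return 1
}
# e.g. device=$(persistent_name sdb)  then use $device in the wizard
```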

Hi,

So I wiped everything and went through the wizard to set up hyperconverged storage and the hosted engine. Storage was created OK, and it went straight on to hosted-engine deployment, which failed with the same error. The system itself created the Gluster storage etc. Do you have any idea? I have this in vdsm.log:

    [root@ovirt3 tmp]# tail /var/log/vdsm/vdsm.log
    2019-12-07 19:45:19,879+0000 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.222.198,35950 (api:54)
    2019-12-07 19:45:19,880+0000 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
    2019-12-07 19:45:19,945+0000 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=621dfe30-de01-428d-b9fb-c6fd2e6df33e (api:48)
    2019-12-07 19:45:19,946+0000 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=621dfe30-de01-428d-b9fb-c6fd2e6df33e (api:54)
    2019-12-07 19:45:19,946+0000 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:711)
    2019-12-07 19:45:24,770+0000 INFO (periodic/3) [vdsm.api] START repoStats(domains=()) from=internal, task_id=434e98ab-eb9c-4dff-b92d-81f57fdf6ed3 (api:48)
    2019-12-07 19:45:24,770+0000 INFO (periodic/3) [vdsm.api] FINISH repoStats return={} from=internal, task_id=434e98ab-eb9c-4dff-b92d-81f57fdf6ed3 (api:54)
    2019-12-07 19:45:24,950+0000 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=3c831c51-3f79-4d87-ad8a-a4a794f47b8f (api:48)
    2019-12-07 19:45:24,950+0000 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=3c831c51-3f79-4d87-ad8a-a4a794f47b8f (api:54)
    2019-12-07 19:45:24,950+0000 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:711)
On 24 Nov 2019, at 23:07, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
I guess it's filtered because /dev/sdb today is not /dev/sdb after a reboot. You'd better use a persistent name like /dev/disk/by-id/scsi-xxx or anything in /dev/disk/by-id/, /dev/disk/by-uuid/, /dev/disk/by-path/, /dev/disk/by-partuuid/ - as they are supposed to be persistent.
Best Regards, Strahil Nikolov

The error message "/dev/sdXX has been excluded by a filter" is potentially very misleading, because it catches all sorts of conditions. Basically, any known storage signature on storage you may be recycling (perhaps from a previous attempt) will trigger it. The functionality is more of a feature; it's just the message that could be improved. And then there are simply too many different storage allocation schemes, all of which LVM tries to recognize. I have occasionally had to fight hard with dmsetup, because some fluke devmapper signature was still found that lsblk -f, lvs, pvs, etc. could not see. I like to use cockpit/storage to test creating volumes before I do another run with the HCI wizard.
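That dmsetup cleanup can be sketched as follows - the `stale_dm_names` helper and the `vdo_` prefix are assumptions for illustration, and on a real host you would inspect the list carefully before removing anything:

```shell
# Filter "dmsetup ls" output (lines of "name<TAB>(major:minor)") down to
# mappings whose name starts with a given prefix, e.g. leftovers from a
# previous VDO attempt. Factored into a function so it can be exercised
# on canned text.
stale_dm_names() {
    awk -v p="$1" 'index($1, p) == 1 { print $1 }'
}

# Real-host usage (assumed; run as root, and review the list first):
#   dmsetup ls | stale_dm_names vdo_
#   dmsetup ls | stale_dm_names vdo_ | xargs -r -n1 dmsetup remove
```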
participants (5)
- Jayme
- Rob
- rob.downer@orbitalsystems.co.uk
- Strahil Nikolov
- thomas@hoberg.net