Hi

I was hitting this error earlier today in my lab environment; I didn't resolve it and gave up!

Prior to the messages you mention, I noticed that "multipath -a /dev/sdb" was failing; you can try it from the command line and see the failure message for yourself. I just assumed the 4.4.2 setup scripts require a multipath device as the underlying storage and no longer support direct-attached single disks. I was using the current (4.4.2) oVirt Node ISO and trying to deploy on 3 x VMware Workstation v15 VMs.

[root@ovn3 ~]# multipath /dev/sdb
Nov 02 19:41:54 | sdb: failed to get udev uid: Invalid argument
Nov 02 19:41:54 | sdb: failed to get sysfs uid: Invalid argument
Nov 02 19:41:54 | sdb: failed to get sgio uid: No such file or directory
[root@ovn3 ~]# echo $?
1
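If the deploy scripts are tripping over multipath claiming the local disk, one workaround that is often suggested (a sketch only; I haven't verified it against the 4.4.2 scripts, and the device name is whatever your disk is) is to blacklist the direct-attached disk in /etc/multipath.conf so multipath leaves it alone:

# /etc/multipath.conf -- illustrative snippet; adjust the devnode to your disk
blacklist {
    devnode "^sdb$"
}

Then reload the daemon (e.g. "multipathd -k'reconfigure'") and re-check with "multipath -ll" that sdb no longer appears.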

Instead, I manually configured the Gluster filesystem first and then deployed the hosted engine. I'm unfamiliar with the next step: I need to add my 2nd and 3rd Gluster nodes as KVM compute nodes, but that's for tomorrow now. I'm not sure whether I can use the 3rd Gluster node as a KVM compute node, since it was marked as the arbiter.

Here's my success run, if it's any use to you:


<-- snip
[ALL]#
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%

mkfs.xfs -i size=512 -L glusterfs /dev/sdb1
mkdir -p /data/glusterfs/myvolume/mybrick
echo 'LABEL=glusterfs /data/glusterfs/myvolume/mybrick xfs defaults  0 0' >> /etc/fstab
mount /data/glusterfs/myvolume/mybrick

firewall-cmd --permanent --add-service=glusterfs
systemctl enable --now glusterd


[ovn1]#
gluster peer probe ovn2.int.ajc
gluster peer probe ovn3.int.ajc

gluster volume create myvolume replica 3 arbiter 1 ovn{1,2,3}.int.ajc:/data/glusterfs/myvolume/mybrick/brick
gluster volume set myvolume features.shard enable
gluster volume start myvolume
volume start: myvolume: success

[ALL]#
mkdir -p /a
echo 'ovn1:myvolume /a        glusterfs       defaults        0 0' >> /etc/fstab
mount /a
chown vdsm:kvm /a
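
One note on the fstab entry above: since /a is a network filesystem, some setups need the mount deferred until networking is up at boot. A commonly used variant (untested in this particular run) adds _netdev:

ovn1:myvolume /a        glusterfs       defaults,_netdev        0 0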


Cockpit setup
#############

(This is to preload the appliance RPM into the yum cache rather than downloading it again:)
[ovn1]#
mkdir -p /var/cache/dnf/ovirt-4.4-8fb26fb2b8638243/packages
cp ovirt-engine-appliance-4.4-20200916125954.1.el8.x86_64.rpm /var/cache/dnf/ovirt-4.4-8fb26fb2b8638243/packages


Hosted Engine

VM -> Engine -> PrepareVM
  Standard options

Storage
  I added the gluster storage as ovn1.int.ajc:/myvolume

Tadaaaa!
<-- snip


Regards
Angus


From: Parth Dhanjal <dparth@redhat.com>
Sent: 02 November 2020 17:15
To: garcialiang.anne@gmail.com <garcialiang.anne@gmail.com>
Cc: users <users@ovirt.org>
Subject: [ovirt-users] Re: ovirt glusterfs
 
Hey!

If you are deploying on a server that is not RHVH-based, the devices are not automatically blacklisted.
Or it could be because the disk was previously partitioned.
You can try these solutions and see if they help:

If the filter (/etc/lvm/lvm.conf) is correct but old partition-table information is found on the disk, you can wipe the old partition information with "wipefs":
wipefs -a /dev/sdx

If the filter is incorrect, edit /etc/lvm/lvm.conf and add this entry to the filter:
"a|^/dev/sdx$|"
 

On Mon, Nov 2, 2020 at 9:29 PM <garcialiang.anne@gmail.com> wrote:
Hello,

I have a problem with the Hyperconverged "Configure Gluster storage and oVirt hosted engine" setup. I get this error message:

failed: [node2.xxxxx.fr] (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]}) => {"ansible_loop_var": "item", "changed": false, "err": "  Device /dev/sdb excluded by a filter.\n", "item": {"key": "gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}]}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}

Could you help me find where the problem is, please?

Thanks,

Anne Garcia
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FRJTPHMZHZOS6FPTBA2D6TLQIO6MD3HG/