Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

I have edited my multipath.conf to exclude local disks, but you need to set '# VDSM PRIVATE' as per the comments in the header of the file. Otherwise, use the /dev/mapper/multipath-device notation, as you would on any Linux system.

Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquintero@gmail.com wrote:
Thanks Alex, that makes more sense now. While trying to follow the instructions provided, I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are locked and indicated as "multipath_member", hence not letting me create new bricks. In the logs I see:

Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}

The same happens for sdc and sdd.

Should I manually edit the filters inside the OS, and what would the impact be?

Thanks again.
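(For reference, a minimal sketch of the /dev/mapper notation Strahil mentions; whether /dev/sdb really sits behind a multipath map, and its WWID, are assumptions to verify on your own host:)

    # list the multipath maps and note the WWID that wraps the local disk
    multipath -ll
    # then address the disk through its map rather than /dev/sdb, e.g.
    pvcreate /dev/mapper/<wwid-of-sdb>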

Strahil, this is the issue I am seeing now:

[image: image.png]

This is through the UI when I try to create a new brick. So my concern is: if I modify the filters on the OS, what impact will that have after server reboots?

Thanks,
-- Adrian Quintero

In which menu do you see it this way?

Best Regards,
Strahil Nikolov

Under Compute > Hosts, select the host that has the locks on /dev/sdb, /dev/sdc, etc., then open Storage Devices; that is where you see a small column with a lock icon on each row.

However, as a workaround on the newly added hosts (3 in total), I had to manually modify /etc/multipath.conf and add the following at the end, as this is what I noticed on the original 3-node setup:

-------------------------------------------------------------
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role
blacklist {
    devnode "*"
}
# END Added by gluster_hci role
-------------------------------------------------------------

After this I restarted multipath, the lock went away, and I was able to configure the new bricks through the UI. However, my concern is: if I reboot the server, will the disks be read the same way by the OS?

I am also now able to expand Gluster with a new replica 3 volume if needed using http://host4.mydomain.com:9090.

Thanks again
-- Adrian Quintero
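(A generic sketch of applying and verifying a multipath.conf change like the one above; this is not prescribed in the thread, and the device name is only an example:)

    # re-read /etc/multipath.conf and drop unused maps
    systemctl restart multipathd
    multipath -F
    # confirm that no multipath map claims the local disk any more
    multipath -ll
    lsblk /dev/sdb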

All my hosts have the same locks, so it seems to be OK.

Best Regards,
Strahil Nikolov

I have the same locks, despite having blacklisted all local disks:

# VDSM PRIVATE
blacklist {
    devnode "*"
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-00000001
}

If you have reconfigured multipath, do not forget to rebuild the initramfs (dracut -f). This is a Linux issue, not an oVirt one. In your case you had a stack like this:

/dev/VG/LV
/dev/disk/by-id/pvuuid
/dev/mapper/multipath-uuid
/dev/sdb

Linux will not allow you to work with /dev/sdb while multipath is locking the block device.

Best Regards,
Strahil Nikolov
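(A minimal sketch of the per-WWID blacklisting Strahil shows above; /dev/sdb is an example device, and the scsi_id path is the usual EL7 location:)

    # find the WWID of the local disk
    /usr/lib/udev/scsi_id --whitelisted --device=/dev/sdb
    # add it to the blacklist section of /etc/multipath.conf:
    #   blacklist {
    #       wwid <wwid-printed-above>
    #   }
    # reload multipath and rebuild the initramfs so the change survives early boot
    systemctl restart multipathd
    dracut -f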

Ok, I will remove the extra 3 hosts, rebuild them from scratch, and re-attach them to clear any possible issues, then try out the suggestions provided.

Thank you!
-- Adrian Quintero

Sahina, can someone from your team review the steps done by Adrian?

Thanks,
Freddy

To scale existing volumes you need to add bricks and run a rebalance on the Gluster volume so that data is correctly redistributed, as Alex mentioned. We do support expanding existing volumes, as the bug https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed.

The procedure to expand a volume:
1. Create bricks from the UI: select Host -> Storage Devices, pick the storage device and click "Create Brick". If the device is shown as locked, make sure there is no signature on the device. If multipath entries have been created for local devices, you can blacklist those devices in multipath.conf and restart multipath. (If you still see the device as locked after this, please report back.)
2. Expand the volume using Volume -> Bricks -> Add Bricks, and select the 3 bricks created in the previous step.
3. Run a rebalance on the volume: Volume -> Rebalance.
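(For reference, a rough sketch of the equivalent gluster CLI for growing an existing replica 3 volume; the volume name "data1" and the host4/5/6 brick paths are examples, not taken from this thread:)

    # add one new replica set (3 bricks) to an existing replica 3 volume
    gluster volume add-brick data1 replica 3 \
        host4:/gluster_bricks/data1/data1 \
        host5:/gluster_bricks/data1/data1 \
        host6:/gluster_bricks/data1/data1
    # redistribute existing data onto the new bricks
    gluster volume rebalance data1 start
    gluster volume rebalance data1 status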

Sahina,
Yesterday I started with a fresh install. I completely wiped all the disks and recreated the arrays from within the controller of our DL380 Gen9s:

OS: RAID 1 (2x600GB HDDs): /dev/sda   // using the oVirt Node 4.3.3.1 ISO
engine and VMSTORE1: JBOD (1x3TB HDD): /dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde

After the OS install on the first 3 servers and setting up SSH keys, I started the hyperconverged deploy process:
1. Logged in to the first server at http://host1.example.com:9090
2. Selected Hyperconverged and clicked "Run Gluster Wizard"
3. Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks, Review)

Hosts/FQDNs:
host1.example.com
host2.example.com
host3.example.com

Packages:

Volumes:
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2

Bricks:
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache: /dev/sde:400GB:writethrough

4. After I hit deploy on the last step of the wizard, that is when I get the disk filter error:

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
(the same "Device ... excluded by a filter" / "Creating physical volume ... failed", rc 5, is reported for /dev/sdb, /dev/sdc and /dev/sdd on all three hosts: vmm10.virt.iad3p, vmm11.virt.iad3p and vmm12.virt.iad3p)

Attached are the generated yml file (/etc/ansible/hc_wizard_inventory.yml) and the "Deployment Failed" file.

Also wondering if I hit this bug? https://bugzilla.redhat.com/show_bug.cgi?id=1635614

Thanks for looking into this.

Adrian Quintero
adrianquintero@gmail.com | adrian.quintero@rackspace.com
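(Generic checks that can help narrow down a "Device ... excluded by a filter" error on a host before re-running the deployment; this is a sketch rather than advice from the thread, and wipefs is destructive, so run it only on disks that are really meant to be wiped:)

    # is the disk claimed as a multipath member?
    multipath -ll
    lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sdb /dev/sdc /dev/sdd
    # is an LVM filter excluding the devices?
    grep -E 'filter|global_filter' /etc/lvm/lvm.conf
    # leftover signatures from a previous install? (destructive)
    wipefs -a /dev/sdb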

Hi Adrian, are you using local storage? If yes, set a blacklist in multipath.conf (don't forget the "# VDSM PRIVATE" flag), rebuild the initramfs, and reboot. When multipath locks a path, no direct access is possible, so your pvcreate would not work. Also, multipath is not needed for local storage ;)

Best Regards,
Strahil Nikolov

+Sachidananda URS <surs@redhat.com> +Gobinda Das <godas@redhat.com> to review the inventory file and failures
Thanks for looking into this.
*Adrian Quintero* *adrianquintero@gmail.com <adrianquintero@gmail.com> | adrian.quintero@rackspace.com <adrian.quintero@rackspace.com>*
On Mon, May 20, 2019 at 7:56 AM Sahina Bose <sabose@redhat.com> wrote:
To scale existing volumes - you need to add bricks and run rebalance on the gluster volume so that data is correctly redistributed as Alex mentioned. We do support expanding existing volumes as the bug https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
As to the procedure to expand volumes (a CLI sketch follows below):
1. Create bricks from the UI - select Host -> Storage Devices, select the storage device and click on "Create Brick". If the device is shown as locked, make sure there's no signature on the device. If multipath entries have been created for local devices, you can blacklist those devices in multipath.conf and restart multipath. (If you see the device as locked even after you do this - please report back.)
2. Expand the volume using Volume -> Bricks -> Add Bricks, and select the 3 bricks created in the previous step.
3. Run Rebalance on the volume: Volume -> Rebalance.
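For reference, the same expansion done from the gluster CLI would look roughly like this (a sketch only - the volume name, hostnames and brick paths follow the naming used elsewhere in this thread and must be adjusted to the actual cluster):

gluster volume add-brick vmstore1 replica 3 \
    host4.mydomain.com:/gluster_bricks/vmstore1/vmstore1 \
    host5.mydomain.com:/gluster_bricks/vmstore1/vmstore1 \
    host6.mydomain.com:/gluster_bricks/vmstore1/vmstore1
# redistribute existing data across the new bricks
gluster volume rebalance vmstore1 start
gluster volume rebalance vmstore1 status

The UI steps above end up performing the equivalent of these commands on one of the gluster nodes.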
On Thu, May 16, 2019 at 2:48 PM Fred Rolland <frolland@redhat.com> wrote:
Sahina, Can someone from your team review the steps done by Adrian? Thanks, Freddy
On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero < adrianquintero@gmail.com> wrote:
Ok, I will remove the extra 3 hosts, rebuild them from scratch and re-attach them to clear any possible issues and try out the suggestions provided.
thank you!
On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
I have the same locks, despite having blacklisted all local disks:

# VDSM PRIVATE
blacklist {
    devnode "*"
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-00000001
}

If you have reconfigured multipath, do not forget to rebuild the initramfs (dracut -f). It's a Linux issue, not an oVirt one.

In your case you had a stack like this:
/dev/VG/LV
/dev/disk/by-id/pvuuid
/dev/mapper/multipath-uuid
/dev/sdb

Linux will not allow you to work with /dev/sdb when multipath is locking the block device.
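A minimal way to see that stack and make the blacklist survive a reboot (a sketch; the device names are the ones from this thread, adjust as needed):

# show what is stacked on top of the local disk, and the multipath maps
lsblk /dev/sdb
multipath -ll
# after editing /etc/multipath.conf (keep the '# VDSM PRIVATE' line so vdsm
# does not overwrite the file), reload multipathd, flush unused maps and
# rebuild the initramfs so the configuration also applies at early boot
systemctl reload multipathd
multipath -F
dracut -f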
Best Regards, Strahil Nikolov
On Thursday, April 25, 2019, 8:30:16 AM GMT-4, Adrian Quintero <adrianquintero@gmail.com> wrote:

Under Compute -> Hosts, select the host that has the locks on /dev/sdb, /dev/sdc, etc., then select Storage Devices, and in there you see a small column with a bunch of lock images showing for each row.

However, as a workaround on the newly added hosts (3 total), I had to manually modify /etc/multipath.conf and add the following at the end, as this is what I noticed from the original 3-node setup:

-------------------------------------------------------------
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role
blacklist {
    devnode "*"
}
# END Added by gluster_hci role
----------------------------------------------------------

After this I restarted multipath and the lock went away, and I was able to configure the new bricks thru the UI. However, my concern is what will happen if I reboot the server - will the disks be read the same way by the OS?

Also, I am now able to expand gluster with a new replica 3 volume if needed, using http://host4.mydomain.com:9090.
thanks again
On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
In which menu do you see it this way ?
Best Regards, Strahil Nikolov
On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <adrianquintero@gmail.com> wrote:

Strahil, this is the issue I am seeing now

[image: image.png]

This is thru the UI when I try to create a new brick.
So my concern is if I modify the filters on the OS what impact will that have after server reboots?
thanks,
On Mon, Apr 22, 2019 at 11:39 PM Strahil <hunter86_bg@yahoo.com> wrote:
I have edited my multipath.conf to exclude local disks, but you need to set '#VDSM private' as per the comments in the header of the file. Otherwise, use the /dev/mapper/multipath-device notation - as you would do with any Linux.

Best Regards, Strahil Nikolov

On Apr 23, 2019 01:07, adrianquintero@gmail.com wrote:

Thanks Alex, that makes more sense now while trying to follow the
instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are locked and indicating "multipath_member", hence not letting me create new bricks. And in the logs I see
Device /dev/sdb excluded by a filter.\n", "item": {"pvname":
"/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
Same thing for sdc, sdd
Should I manually edit the filters inside the OS, what will be the impact?
thanks again.

-- Adrian Quintero

On Tue, May 21, 2019 at 12:16 PM Sahina Bose <sabose@redhat.com> wrote:
On Mon, May 20, 2019 at 9:55 PM Adrian Quintero <adrianquintero@gmail.com> wrote:
Sahina, Yesterday I started with a fresh install, I completely wiped clean all the disks, recreated the arrays from within my controller of our DL380 Gen 9's.
Attached is the generated yml file ( /etc/ansible/hc_wizard_inventory.yml) and the "Deployment Failed" file
Also wondering if I hit this bug? https://bugzilla.redhat.com/show_bug.cgi?id=1635614
+Sachidananda URS <surs@redhat.com> +Gobinda Das <godas@redhat.com> to review the inventory file and failures
Hello Adrian,

Can you please provide the output of:
# fdisk -l /dev/sdd
# fdisk -l /dev/sdb

I think there could be a stale signature on the disk causing this error. Some of the possible solutions to try:

1) Wipe the signatures:
# wipefs -a /dev/sdb
# wipefs -a /dev/sdd

2) You can zero out the first few sectors of the disk by:
# dd if=/dev/zero of=/dev/sdb bs=1M count=10

3) Check if the partition is visible in /proc/partitions. If not:
# partprobe /dev/sdb

4) Check if filtering is configured wrongly in /etc/lvm/lvm.conf - grep for 'filter =' (see the example below).

-sac
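As an illustration of point 4 (a sketch; the filter line shown is hypothetical, not taken from Adrian's hosts):

# show only active (uncommented) filter settings
grep -E '^[[:space:]]*(global_)?filter[[:space:]]*=' /etc/lvm/lvm.conf
# an active entry such as
#   filter = [ "a|^/dev/sda|", "r|.*|" ]
# would reject /dev/sdb-/dev/sdd and produce exactly the
# "Device /dev/sdX excluded by a filter" error seen in the deploy log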

Sac,

*To answer some of your questions:*

*fdisk -l:*
[root@host1 ~]# fdisk -l /dev/sdb
Disk /dev/sde: 480.1 GB, 480070426624 bytes, 937637552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
[root@host1 ~]# fdisk -l /dev/sdc
Disk /dev/sdc: 3000.6 GB, 3000559427584 bytes, 5860467632 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
[root@host1 ~]# fdisk -l /dev/sdd
Disk /dev/sdd: 3000.6 GB, 3000559427584 bytes, 5860467632 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes

*1) I did wipefs to all /dev/sdb,c,d,e*
*2) I did not zero out the disks as I had done it thru the controller.*
*3) cat /proc/partitions:*
[root@host1 ~]# cat /proc/partitions
major minor  #blocks  name
   8    0   586029016 sda
   8    1     1048576 sda1
   8    2   584978432 sda2
   8   16  2930233816 sdb
   8   32  2930233816 sdc
   8   48  2930233816 sdd
   8   64   468818776 sde
*4) grep filter /etc/lvm/lvm.conf (I did not modify the lvm.conf file)*
[root@host1 ~]# grep "filter =" /etc/lvm/lvm.conf
# filter = [ "a|.*/|" ]
# filter = [ "r|/dev/cdrom|" ]
# filter = [ "a|loop|", "r|.*|" ]
# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
# filter = [ "a|.*/|" ]
# global_filter = [ "a|.*/|" ]
# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]

*What I did to get it working:*
I re-installed my first 3 hosts using "ovirt-node-ng-installer-4.3.3-2019041712.el7.iso" and made sure I zeroed the disks from within the controller, then I performed the following steps:
1.- Modified the blacklist section on /etc/multipath.conf to this:
blacklist {
    # protocol "(scsi:adt|scsi:sbp)"
    devnode "*"
}
2.- Made sure the second line of /etc/multipath.conf has: # VDSM PRIVATE
3.- Increased /var/log to 15GB
4.- Rebuilt initramfs, rebooted
5.- wipefs -a /dev/sdb /dev/sdc /dev/sdd /dev/sde
6.- Started the hyperconverged setup wizard and added *"gluster_features_force_varlogsizecheck: false"* to the "vars:" section of the generated Ansible inventory file (*/etc/ansible/hc_wizard_inventory.yml*), as it was complaining about the /var/log messages LV.

*EUREKA:* After doing the above I was able to get past the filter issues, however I am still concerned that during a reboot the disks might come up differently. For example /dev/sdb might come up as /dev/sdx...

I am trying to make sure this setup is always the same, as we want to move this to production; however, it seems I still don't have the full hang of it and the RHV 4.1 course is way too old :)

Thanks again for helping out with this.

-AQ

On Tue, May 21, 2019 at 3:29 AM Sachidananda URS <surs@redhat.com> wrote:
On Tue, May 21, 2019 at 12:16 PM Sahina Bose <sabose@redhat.com> wrote:
On Mon, May 20, 2019 at 9:55 PM Adrian Quintero <adrianquintero@gmail.com> wrote:
Sahina, Yesterday I started with a fresh install, I completely wiped clean all the disks, recreated the arrays from within my controller of our DL380 Gen 9's.
Attached is the generated yml file ( /etc/ansible/hc_wizard_inventory.yml) and the "Deployment Failed" file
Also wondering if I hit this bug? https://bugzilla.redhat.com/show_bug.cgi?id=1635614
+Sachidananda URS <surs@redhat.com> +Gobinda Das <godas@redhat.com> to review the inventory file and failures
Hello Adrian,
Can you please provide the output of: # fdisk -l /dev/sdd # fdisk -l /dev/sdb
I think there could be stale signature on the disk causing this error. Some of the possible solutions to try: 1) # wipefs -a /dev/sdb # wipefs -a /dev/sdd
2) You can zero out first few sectors of disk by:
# dd if=/dev/zero of=/dev/sdb bs=1M count=10
3) Check if partition is visible in /proc/partitions If not: # partprobe /dev/sdb
4) Check if filtering is configured wrongly in /etc/lvm/lvm.conf grep for 'filter ='
-sac
-- Adrian Quintero

On Tue, May 21, 2019 at 9:00 PM Adrian Quintero <adrianquintero@gmail.com> wrote:
Sac,
6.-started the hyperconverged setup wizard and added* "gluster_features_force_varlogsizecheck: false"* to the "vars:" section on the Generated Ansible inventory : */etc/ansible/hc_wizard_inventory.yml* file as it was complaining about /var/log messages LV.
In the upcoming release I plan to remove this check, since we will go ahead with logrotate.
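For anyone hitting the same /var/log size check, the override mentioned above ends up looking roughly like this in the generated inventory (a sketch; the exact top-level group name in /etc/ansible/hc_wizard_inventory.yml may differ):

hc_nodes:
  vars:
    gluster_features_force_varlogsizecheck: false   # skip the /var/log size check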
*EUREKA: *After doing the above I was able to get past the filter issues, however I am still concerned if during a reboot the disks might come up differently. For example /dev/sdb might come up as /dev/sdx...
Even this shouldn't be a problem going forward, since we will use UUID to mount the devices. And the device name change shouldn't matter. Thanks for your feedback, I will see how we can improve the install experience. -sac
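To confirm that on a deployed node, the brick mounts can be checked directly (a sketch; the VG/LV names follow the gluster_vg_sdb / gluster_bricks naming used in this thread and may differ on a real host):

# brick mounts should reference stable UUIDs or LV paths, never /dev/sdX
grep gluster_bricks /etc/fstab
# map a brick LV to the UUID that fstab should be using
blkid /dev/mapper/gluster_vg_sdb-gluster_lv_engine
lsblk -o NAME,UUID,MOUNTPOINT /dev/sdb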

Hey Sahina,

it seems that almost all of my devices are locked - just like Fred's. What exactly does it mean - I don't have any issues with my bricks/storage domains.

Best Regards,
Strahil Nikolov

On Tue, May 21, 2019 at 2:36 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Hey Sahina,
it seems that almost all of my devices are locked - just like Fred's. What exactly does it mean - I don't have any issues with my bricks/storage domains.
If the devices show up as locked - it means the disk cannot be used to create a brick. This is the case when the disk either already has a filesystem or is in use. But if the device is a clean device and it still shows up as locked - this could be a bug in how python-blivet / vdsm reads this.

The check is implemented as:

def _canCreateBrick(device):
    # usable only if the device has no children, no existing format,
    # no mountpoint, and is not itself an optical or LVM device
    if not device or device.kids > 0 or device.format.type or \
            hasattr(device.format, 'mountpoint') or \
            device.type in ['cdrom', 'lvmvg', 'lvmthinpool', 'lvmlv', 'lvmthinlv']:
        return False
    return True
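A couple of read-only checks on the host map onto those conditions (a sketch, using /dev/sdb from this thread):

# children (partitions or multipath maps), an existing FSTYPE, or a
# mountpoint will all make the device show up as locked in the UI
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sdb
# with no options wipefs only lists on-disk signatures, it does not erase
wipefs /dev/sdb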
Best Regards, Strahil Nikolov
On Monday, May 20, 2019, 14:56:11 GMT+3, Sahina Bose <sabose@redhat.com> wrote:
To scale existing volumes - you need to add bricks and run rebalance on the gluster volume so that data is correctly redistributed as Alex mentioned. We do support expanding existing volumes as the bug https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed
As to procedure to expand volumes: 1. Create bricks from UI - select Host -> Storage Device -> Storage device. Click on "Create Brick" If the device is shown as locked, make sure there's no signature on device. If multipath entries have been created for local devices, you can blacklist those devices in multipath.conf and restart multipath. (If you see device as locked even after you do this -please report back). 2. Expand volume using Volume -> Bricks -> Add Bricks, and select the 3 bricks created in previous step 3. Run Rebalance on the volume. Volume -> Rebalance.

You create the brick on top of the multipath device. Look for one that is the same size as the /dev/sd* device that you want to use.

On 2019-04-25 08:00, Strahil Nikolov wrote:
In which menu do you see it this way ?
Best Regards, Strahil Nikolov
On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <adrianquintero@gmail.com> wrote:

Strahil, this is the issue I am seeing now

This is thru the UI when I try to create a new brick.
So my concern is if I modify the filters on the OS what impact will that have after server reboots?
thanks,
On Mon, Apr 22, 2019 at 11:39 PM Strahil <hunter86_bg@yahoo.com> wrote:
I have edited my multipath.conf to exclude local disks, but you need to set '#VDSM private' as per the comments in the header of the file. Otherwise, use the /dev/mapper/multipath-device notation - as you would do with any Linux.

Best Regards, Strahil Nikolov

On Apr 23, 2019 01:07, adrianquintero@gmail.com wrote:

Thanks Alex, that makes more sense now while trying to follow the instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are locked and indicating "multipath_member", hence not letting me create new bricks. And in the logs I see
Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5} Same thing for sdc, sdd
Should I manually edit the filters inside the OS, what will be the impact?
thanks again.

-- Adrian Quintero
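If it helps, one way to line up a /dev/sd* disk with its multipath map, along the lines Alex describes (a sketch; the map name is a placeholder - on a real host it is the WWID of the disk):

# list disk sizes and the multipath maps stacked on them
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
multipath -ll
# once the matching map is identified, clear any leftover signatures on it
wipefs -a /dev/mapper/<wwid-of-the-3TB-disk>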

I understand, however the "create brick" option is greyed out (not enabled). The only way I could get that option to be enabled is if I manually edit the multipath.conf file and add:

-------------------------------------------------------------
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role
blacklist {
    devnode "*"
}
# END Added by gluster_hci role
----------------------------------------------------------

Then I go back to the UI and I can use sd* (multipath device).

thanks,

On Thu, Apr 25, 2019 at 8:41 AM Alex McWhirter <alex@triadic.us> wrote:
You create the brick on top of the multipath device. Look for one that is the same size as the /dev/sd* device that you want to use.
On 2019-04-25 08:00, Strahil Nikolov wrote:
In which menu do you see it this way ?
Best Regards, Strahil Nikolov
On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <adrianquintero@gmail.com> wrote:

Strahil, this is the issue I am seeing now

[image: image.png]

This is thru the UI when I try to create a new brick.
So my concern is if I modify the filters on the OS what impact will that have after server reboots?
thanks,
On Mon, Apr 22, 2019 at 11:39 PM Strahil <hunter86_bg@yahoo.com> wrote:
I have edited my multipath.conf to exclude local disks, but you need to set '#VDSM private' as per the comments in the header of the file. Otherwise, use the /dev/mapper/multipath-device notation - as you would do with any Linux.

Best Regards, Strahil Nikolov

On Apr 23, 2019 01:07, adrianquintero@gmail.com wrote:

Thanks Alex, that makes more sense now while trying to follow the
instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are locked and indicating "multipath_member", hence not letting me create new bricks. And in the logs I see
Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
"vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
Same thing for sdc, sdd
Should I manually edit the filters inside the OS, what will be the impact?
thanks again.
-- Adrian Quintero

You don't create the brick on the /dev/sd* device. You can see where I create the brick on the highlighted multipath device (see attachment). If for some reason you can't do that, you might need to run wipefs -a on it, as it probably has some leftover headers from another FS.

On 2019-04-25 08:53, Adrian Quintero wrote:
I understand, however the "create brick" option is greyed out (not enabled). The only way I could get that option to be enabled is if I manually edit the multipath.conf file and add:

-------------------------------------------------------------
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role
blacklist {
    devnode "*"
}
# END Added by gluster_hci role
----------------------------------------------------------

Then I go back to the UI and I can use sd* (multipath device).
thanks,
On Thu, Apr 25, 2019 at 8:41 AM Alex McWhirter <alex@triadic.us> wrote:
You create the brick on top of the multipath device. Look for one that is the same size as the /dev/sd* device that you want to use.
On 2019-04-25 08:00, Strahil Nikolov wrote:
In which menu do you see it this way ?
Best Regards, Strahil Nikolov
On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <adrianquintero@gmail.com> wrote:

Strahil, this is the issue I am seeing now

This is thru the UI when I try to create a new brick.
So my concern is if I modify the filters on the OS what impact will that have after server reboots?
thanks,
On Mon, Apr 22, 2019 at 11:39 PM Strahil <hunter86_bg@yahoo.com> wrote:

I have edited my multipath.conf to exclude local disks, but you need to set '#VDSM private' as per the comments in the header of the file. Otherwise, use the /dev/mapper/multipath-device notation - as you would do with any Linux.

Best Regards, Strahil Nikolov

On Apr 23, 2019 01:07, adrianquintero@gmail.com wrote:

Thanks Alex, that makes more sense now while trying to follow the instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are locked and indicating "multipath_member", hence not letting me create new bricks. And in the logs I see
Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5} Same thing for sdc, sdd
Should I manually edit the filters inside the OS, what will be the impact?
thanks again.
-- Adrian Quintero
participants (7)
- Adrian Quintero
- Alex McWhirter
- Fred Rolland
- Sachidananda URS
- Sahina Bose
- Strahil
- Strahil Nikolov