
On February 25, 2020 11:01:47 PM GMT+02:00, adrianquintero@gmail.com wrote:
Thanks Strahil, I made a mistake in this 3-node cluster; I use this cluster for testing. In our Prod environment we do have the blacklist, but it is as follows:
# VDSM REVISION 1.8
# VDSM PRIVATE
blacklist {
    devnode "*"
}
However, we did not add each individual local disk to the blacklist entries. Would I still have to add the individual entries as you suggested? I thought 'devnode "*"' already achieved this...
From another post you mentioned something similar to:

# VDSM PRIVATE
blacklist {
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-00000001
}
My production host:

[root@host18 ~]# multipath -v2 -d
[root@host18 ~]#
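One way to collect the WWIDs of the local disks for those per-host blacklist entries is to query udev directly; a minimal sketch (device names such as /dev/sda are examples, assuming a RHEL/CentOS-style host where scsi_id lives under /usr/lib/udev):

# List block devices to identify which ones are local
lsblk -d -o NAME,SIZE,TYPE,MODEL

# Print the WWID multipath would use for a given local SCSI/SATA disk
/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda

# NVMe devices expose their identifier under /dev/disk/by-id instead
ls -l /dev/disk/by-id/ | grep nvme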
Thoughts?
Thanks,
Adrian
'devnode "*"' will blacklist all /dev/XYZ devices, which is only acceptable if you do not plan to use SAN or iSCSI. Otherwise, you should blacklist only the local devices and the rest will be under mpath.

Best Regards,
Strahil Nikolov
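After editing /etc/multipath.conf on a host, a minimal sketch of applying and checking the new blacklist (assuming multipathd runs under systemd, as on oVirt nodes):

# Dry run: parse the config and show what multipath would create
multipath -v2 -d

# Tell multipathd to re-read the configuration
systemctl reload multipathd

# Rebuild the maps and confirm only the SAN/iSCSI LUNs remain
multipath -r
multipath -ll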