One recommendation is to get rid of multipath for your SSD.
Replica 3 volumes are quite resilient, and I'm really surprised this happened to you.
For the multipath part, you can create something like this:
[root@ovirt1 ~]# cat /etc/multipath/conf.d/blacklist.conf
blacklist {
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
}
As you are already running multipath, just run the following to get the WWID of your SSD:
multipath -v4 | grep 'got wwid of'
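To illustrate, here is a minimal sketch of pulling the WWID out of that output and generating the blacklist stanza. The sample line is an assumed example of the 'got wwid of' message format; on a real host you would pipe the actual `multipath -v4` output instead, and reload with `multipath -r` after writing the file:

```shell
# Assumed sample of a 'got wwid of' line; on a real host use:
#   multipath -v4 | grep 'got wwid of'
sample="Oct 13 16:00:00 | sda: got wwid of 'Crucial_CT256MX100SSD1_14390D52DCF5'"

# Extract the quoted WWID from the log line
wwid=$(printf '%s\n' "$sample" | grep "got wwid of" | sed "s/.*got wwid of '\([^']*\)'.*/\1/")

# Generate the stanza for /etc/multipath/conf.d/blacklist.conf
printf 'blacklist {\n    wwid %s\n}\n' "$wwid"
```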
What were the gluster volume options you were running with? oVirt runs the volume with
'performance.strict-o-direct' and Direct I/O, so you should not lose any data.
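For reference, you can check those Direct I/O related settings on the volume like this (the volume name 'data' is a placeholder for your own):

```
gluster volume get data performance.strict-o-direct
gluster volume get data network.remote-dio
```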
Best Regards,
Strahil Nikolov
On Tuesday, 13 October 2020 at 16:35:26 GMT+3, Jarosław Prokopowski
<jprokopowski(a)gmail.com> wrote:
Hi Nikolov,
Thanks for the very interesting answer :-)
I do not use any RAID controller. I was hoping GlusterFS would take care of fault
tolerance, but apparently it failed.
I have one Samsung 1TB SSD drive in each server for VM storage. I see it is of type
"multipath". There is an XFS filesystem over standard LVM (not thin).
Mount options are: inode64,noatime,nodiratime
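A mount entry with those options would look roughly like this (the device path and mount point below are hypothetical, not my actual layout):

```
/dev/mapper/gluster_vg-gluster_lv  /gluster_bricks/data  xfs  inode64,noatime,nodiratime  0 0
```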
SELinux was in permissive mode.
I must read more about the things you described, as I have never dived into them.
Please let me know if you have any suggestions :-)
Thanks a lot!
Jarek
_______________________________________________
Users mailing list -- users(a)ovirt.org