A few things to consider:

First, what is your RAID situation per host? If you're using mdadm-based
software RAID, you need to make sure your drives support power loss data
protection. This is mostly a feature of enterprise drives.
Essentially, it ensures the drive reserves enough energy to flush its
write cache to disk on power loss. Most modern drives have a non-trivial
amount of built-in write cache, and losing that data on power loss will
gladly corrupt files, especially on software RAID setups.
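As a rough sketch of how you might audit this per host (device names here are hypothetical placeholders, and exact output formats vary by drive and tool version):

```shell
# Hypothetical device names (/dev/sda, /dev/nvme0) -- substitute your own.

# SATA/SAS: check whether the drive's volatile write cache is enabled.
hdparm -I /dev/sda | grep -i 'write cache'

# NVMe: the VWC field reports whether a volatile write cache is present.
nvme id-ctrl /dev/nvme0 | grep -i vwc

# If the drive lacks power loss protection, disabling the volatile
# write cache trades performance for safety on power loss:
hdparm -W 0 /dev/sda
```

Whether a given drive actually has the capacitors for power loss protection usually has to be confirmed from the vendor's datasheet; the commands above only show cache state, not protection.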
If you're using hardware RAID, make sure you have disabled the
drive-based write cache, and that a battery or capacitor is connected to
the RAID card's cache module.
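Vendor tooling differs, but as an illustrative sketch using the MegaCli utility for LSI/Broadcom controllers (adapter and logical-drive selectors here are examples; consult your controller's documentation):

```shell
# Check battery/capacitor status on the controller's cache module.
MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL

# Disable the per-drive write cache on drives behind the controller.
MegaCli64 -LDSetProp -DisDskCache -LAll -aALL

# Drop from write-back to write-through automatically if the BBU fails.
MegaCli64 -LDSetProp NoCachedBadBBU -LAll -aALL
```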
If you're using ZFS, which isn't really supported, you need a good UPS
and to have it set up to shut systems down cleanly. ZFS does not take
power outages well. Power loss data protection is really important too,
but it's not a fix-all for ZFS, since it also caches writes in system
RAM quite a bit. A dedicated log (SLOG) device with power loss data
protection can help mitigate that, but the power issues are really the
more pressing concern in this situation.
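A minimal sketch of both mitigations, assuming a pool named "tank" and a power-loss-protected NVMe device (both names are placeholders), plus a NUT-based UPS shutdown:

```shell
# Add a power-loss-protected device as a separate intent log (SLOG) so
# synchronous writes land on protected media rather than only in RAM.
zpool add tank log /dev/nvme0n1

# Optionally force all writes through the intent log (costs throughput).
zfs set sync=always tank

# With NUT (Network UPS Tools), upsmon can shut the host down cleanly on
# battery exhaustion; an /etc/ups/upsmon.conf entry looks roughly like:
#   MONITOR myups@localhost 1 upsmon <password> master
#   SHUTDOWNCMD "/sbin/shutdown -h +0"
```

Note the SLOG only protects synchronous writes; async writes buffered in RAM are still lost on power cut, which is why the clean UPS shutdown matters more.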
As far as Gluster is concerned, there is not much that can easily
corrupt data on power loss. My only thought is that if your switches
are not also battery-backed, that could be an issue.
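After a power event it's worth checking Gluster's self-heal state before restarting VMs; a sketch, assuming a volume named "data" (substitute your own volume name):

```shell
# List files pending self-heal, and any in split-brain.
gluster volume heal data info
gluster volume heal data info split-brain

# Trigger a full heal if entries remain after the bricks are back.
gluster volume heal data full
```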
On 2020-10-08 08:15, Jarosław Prokopowski wrote:
> Hi Guys,
>
> I had a situation twice where, due to an unexpected power outage,
> something went wrong and VMs on glusterfs were not recoverable.
> Gluster heal did not help and I could not start the VMs any more.
> Is there a way to make such a setup bulletproof?
> Does it matter which volume type I choose - raw or qcow2? Or thin
> provision versus preallocated?
> Any other advice?