By default, Gluster's write-behind translator buffers writes and flushes them to disk later, *even* when the file is opened with the O_DIRECT flag. Not honoring O_DIRECT means a reader on another client could READ stale data from the bricks, because some WRITEs may not yet have been flushed to disk. performance.strict-o-direct=on is one of the options needed to truly honor O_DIRECT behavior (the other being network.remote-dio=off). O_DIRECT is what qemu uses for the vm(s) by virtue of the cache=none option being set.
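For example, on a Gluster node (a minimal sketch -- "data" is a placeholder volume name):

    # Tell write-behind to honor O_DIRECT instead of buffering those writes
    gluster volume set data performance.strict-o-direct on

    # Pass the O_DIRECT flag through to the bricks instead of having it
    # filtered out at the client protocol level
    gluster volume set data network.remote-dio off

And on the VM side, cache=none is what makes qemu open the disk image with O_DIRECT; the equivalent libvirt disk snippet (illustrative) would be:

    <driver name='qemu' type='raw' cache='none'/>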

-Krutika


On Mon, Feb 25, 2019 at 2:50 PM Leo David <leoalex@gmail.com> wrote:
Hello Everyone,
As per some previous posts, this "performance.strict-o-direct=on" setting has caused trouble or poor vm iops. I've noticed that this option is still part of the default setup and is applied automatically by the "Optimize for virt. store" button.
In the end... is this setting good or bad practice for a vm storage volume?
Does it depend (like perhaps other gluster performance options) on the storage backend:
- raid type /  jbod
- raid controller cache size
I usually use jbod disks attached to an lsi hba card (no cache). Are there any gluster recommendations regarding this setup?
Is there any documentation on best practices for configuring ovirt's gluster for different types of storage backends?
Thank you very much!

Have a great week,

Leo

--
Best regards, Leo David