Due to POSIX compliance, oVirt needs a 512-byte physical sector size. If your SSD/NVMe uses the newer standard (4096), you will need to use VDO with the '--emulate512' flag. If your device already reports a 512-byte physical sector size, you can skip VDO entirely.
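To check what your drive reports, something like this works (device name is just an example; these are command sketches that need root and a real block device, so adjust to your setup):

```shell
# Physical and logical sector sizes of the device
blockdev --getpbsz /dev/nvme0n1    # physical sector size (512 or 4096)
blockdev --getss /dev/nvme0n1      # logical sector size

# If physical is 4096, create the VDO volume with 512-byte emulation
# (volume and device names here are placeholders)
vdo create --name=vdo_data --device=/dev/nvme0n1 --emulate512=enabled
```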
For the mount options of the bricks, you can use noatime and inode64.
Also, if you use SELinux, use the 'context=' mount option to tell the kernel to skip looking up the SELinux label per file, since every file on the brick carries the same label anyway.
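Putting those together, a brick entry in /etc/fstab could look roughly like this (LV path and mount point are placeholders; glusterd_brick_t is the SELinux type normally applied to Gluster bricks, but verify against your policy before using it):

```shell
# Example /etc/fstab line for an XFS Gluster brick with the options above
/dev/mapper/gluster_vg-brick1  /gluster_bricks/brick1  xfs  noatime,inode64,context="system_u:object_r:glusterd_brick_t:s0"  0 0
```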
Also, consider setting the SSD I/O scheduler to 'none' (blk-mq multiqueue should be enabled on EL8 hypervisors by default), which avoids reordering your I/O requests and speeds things up on fast storage. NVMe devices use 'none' by default.
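A quick sketch of checking and switching the scheduler (sdX is a placeholder, and the echo needs root; the udev rule is one possible way to make it persistent, matching only non-rotational disks):

```shell
# Show the available schedulers; the active one is in brackets
cat /sys/block/sdX/queue/scheduler

# Switch to 'none' at runtime
echo none > /sys/block/sdX/queue/scheduler

# Persist it across reboots, e.g. in /etc/udev/rules.d/60-scheduler.rules:
# ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
```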
On Mon, Apr 26, 2021 at 22:43, penguin pages
"...Tuning Gluster with VDO bellow is quite difficult and the overhead of using VDO could
reduce performance ...." Yup, hence the creation of a dedicated data00 volume from the 1TB SSD each server had. I matched the options listed in oVirt, but OCP still would not address the drive as a target for deployment. That is when I opened a ticket with RH and they noted Gluster is not a supported target for OCP. Hence then off to check if we could do Ceph HCI.. nope.
"..I would try with VDO compression and dedup disabled.If your SSD has 512 byte physical..& logical size, you can skip VDO at all to check performance....." Yes.. VDO removed was/ is next test. But your note about 512 is yes.. Are their tuning parameters for Gluster with this?
"...Also FS mount options are very important for XFS...." - What options do you use / recommend? Do you have a link to said tuning manual page where I could review and knowing the base HCI volume is VDO + XFS + Gluster. But second volume for OCP will be just XFS + Gluster I would assume this may change recommendations.