
Hi all,

I'm benchmarking storage performance on OLVM using fio across several scenarios: a local brick, GlusterFS, and NFS, with the network filesystems tested both mounted on the host and as storage domains inside VMs. I'm seeing significant differences in throughput and latency, especially with GlusterFS and OLVM guest disks.

Summary of Results:

| Location | IOPS | Bandwidth | Total IO | Avg Latency | Block Size | Jobs | CPU (sys) | Notes |
|---|---|---|---|---|---|---|---|---|
| Local Brick | 1816 | 1816 MiB/s | 109 GiB | 587 µs | 1M | 4 | 45% | Fastest, direct disk |
| OLVM Guest Disk (GlusterFS Storage Domain) | 117 | 118 MiB/s | 8196 MiB | 12.3 ms | 1M | 4 | 3.8% | Virtualization overhead |
| GlusterFS Mounted on Host | 211 | 211 MiB/s | 12.4 GiB | 18.8 ms | 1M | 4 | 3.5% | Improved, but still slower |
| NFS Mount on Host | 4241 | 4242 MiB/s | 249 GiB | 613 µs | 1M | 4 | 80% | Fastest network storage |
| OLVM Guest Disk (NFS Storage Domain) | 532 | 532 MiB/s | 32.8 GiB | 5.4 ms | 1M | 4 | 17.4% | NFS on OLVM guest |

Observation:

GlusterFS and NFS sit on the same network VLAN, yet NFS consistently outperforms GlusterFS, both on the host and as an OLVM storage domain. This suggests the network isn't the bottleneck (a raw TCP sanity check along those lines is sketched at the end of this post); the issue is more likely GlusterFS configuration, protocol overhead, or how OLVM handles storage domains.

Questions:

1. Has anyone tuned GlusterFS or OLVM for better performance in similar scenarios?
2. Are there recommended settings or architectural changes to improve GlusterFS throughput/latency?
3. Any best practices for OLVM storage domain configuration (GlusterFS vs NFS)?
4. Is this performance gap expected, or should I look deeper into my setup?

Test Details: fio commands used (see the representative sketch at the end of this post).

Any advice, tuning tips, or references would be greatly appreciated!
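
For reference, here is a representative sketch of the kind of fio invocation behind the numbers above. The block size and job count match the table; the target directory, file size, I/O pattern, ioengine, and queue depth are illustrative assumptions, not the exact values from the runs.

```sh
# Representative fio sketch, not the exact command used:
# - bs=1M and numjobs=4 match the Block Size / Jobs columns in the table
# - /mnt/target, size, rw pattern, ioengine, and iodepth are assumptions
fio --name=seqwrite \
    --directory=/mnt/target \
    --rw=write \
    --bs=1M \
    --numjobs=4 \
    --size=8G \
    --direct=1 \
    --ioengine=libaio \
    --iodepth=16 \
    --group_reporting
```

The same command was pointed at each mount point in turn (local brick, GlusterFS mount, NFS mount), and run inside a guest against the virtual disk for the storage-domain cases.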
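And since the "network isn't the bottleneck" observation carries a lot of weight, here is the kind of raw TCP throughput check that supports it. The hostname `gluster-node1` is a placeholder, and the stream count is chosen to mirror the four fio jobs.

```sh
# On the storage server (Gluster node or NFS server):
iperf3 -s

# On the OLVM host; gluster-node1 is a placeholder hostname.
# -P 4 runs four parallel TCP streams for 30 seconds.
iperf3 -c gluster-node1 -P 4 -t 30
```

If this shows line-rate throughput on the VLAN while GlusterFS still crawls, the bottleneck is above the network layer.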