Hi all,
I can confirm that when using libgfapi with oVirt + Gluster replica 3 (Hyperconverged),
read and write performance inside a VM was 4 to 5 times better than when using fuse.
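For anyone who wants to try this, a minimal sketch of how libgfapi is typically
enabled in oVirt (the --cver value below is an assumption, check it against your
cluster compatibility level; running VMs only switch over after a full power cycle):

# On the engine host: enable libgfapi for the given cluster compatibility level
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine

# On a hypervisor: a gfapi-attached disk shows a gluster:// URL on the qemu
# command line, while a fuse-attached one goes through the glusterSD mount path
ps -ef | grep [q]emu-kvm | grep -o 'gluster://[^ ,]*'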
--------------------------------------------------------------------------------------------------
Tested with CentOS 6 and CentOS 7 VMs on the hyperconverged cluster. HW:
--------------------------------------------------------------------------------------------------
oVirt 4.3.10 hypervisors with Gluster replica 3:
- 256 GB RAM
- 32 total cores with hyperthreading
- RAID 1 (2 HDDs) for OS
- RAID 6 (9 SSDs) for Gluster; also tested with RAID 10 and JBOD, all of which showed
similar improvements with libgfapi (4 to 5 times better) on replica 3 volumes
- 10 GbE NICs, one for ovirtmgmt and one for Gluster
- Ran tests using fio (job file below)
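For reference, the exact fio invocation from the runs below can also be written as a
job file, which makes reruns easier to keep consistent (same parameters, just fio's
ini syntax):

# randrw.fio (run with: fio randrw.fio)
[test]
randrepeat=1
ioengine=libaio
direct=1
gtod_reduce=1
filename=test
bs=4k
iodepth=64
size=4G
rw=randrw
rwmixread=75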
-------------------------------------------------------------------------------
Test results using fuse (1500 MTU) (Took about 4~5 min):
-------------------------------------------------------------------------------
[root@test3 mail]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw \
    --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.13
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [11984K/4079K/0K /s] [2996 /1019 /0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=8894: Mon Mar 29 10:05:35 2021
read : io=3070.5MB, bw=12286KB/s, iops=3071 , runt=255918msec <------------------
write: io=1025.6MB, bw=4103.5KB/s, iops=1025 , runt=255918msec <------------------
cpu : usr=1.84%, sys=10.50%, ctx=859129, majf=0, minf=19
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=786043/w=262533/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=3070.5MB, aggrb=12285KB/s, minb=12285KB/s, maxb=12285KB/s, mint=255918msec,
maxt=255918msec
WRITE: io=1025.6MB, aggrb=4103KB/s, minb=4103KB/s, maxb=4103KB/s, mint=255918msec,
maxt=255918msec
Disk stats (read/write):
dm-3: ios=785305/262494, merge=0/0, ticks=492833/15794537, in_queue=16289356,
util=100.00%, aggrios=786024/262789, aggrmerge=19/45, aggrticks=492419/15811831,
aggrin_queue=16303803, aggrutil=100.00%
sda: ios=786024/262789, merge=19/45, ticks=492419/15811831, in_queue=16303803,
util=100.00%
--------------------------------------------------------------------------------------------------------------------------
Test results using fuse (9000 MTU) // Did not see much of a difference (Took about 4~5 min):
--------------------------------------------------------------------------------------------------------------------------
[root@test3 mail]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw \
    --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [14956K/4596K/0K /s] [3739 /1149 /0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2193: Mon Mar 29 10:22:44 2021
read : io=3070.8MB, bw=12882KB/s, iops=3220 , runt=244095msec <------------------
write: io=1025.3MB, bw=4300.1KB/s, iops=1075 , runt=244095msec <------------------
cpu : usr=1.85%, sys=10.43%, ctx=849742, majf=0, minf=21
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=786117/w=262459/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=3070.8MB, aggrb=12882KB/s, minb=12882KB/s, maxb=12882KB/s, mint=244095msec,
maxt=244095msec
WRITE: io=1025.3MB, aggrb=4300KB/s, minb=4300KB/s, maxb=4300KB/s, mint=244095msec,
maxt=244095msec
Disk stats (read/write):
dm-3: ios=785805/262493, merge=0/0, ticks=511951/15009580, in_queue=15523355,
util=100.00%, aggrios=786105/262713, aggrmerge=18/19, aggrticks=511235/15026104,
aggrin_queue=15536995, aggrutil=100.00%
sda: ios=786105/262713, merge=18/19, ticks=511235/15026104, in_queue=15536995,
util=100.00%
--------------------------------------------------------------------------------------
Test results using LIBGFAPI (9000 MTU), took about 38 seconds
--------------------------------------------------------------------------------------
[root@vmm04 ~]# ping -I glusternet -M do -s 8972 192.168.1.6
PING 192.168.1.6 (192.168.1.6) from 192.168.1.4 glusternet: 8972(9000) bytes of data.
8980 bytes from 192.168.1.6: icmp_seq=1 ttl=64 time=0.300 ms
[root@vmm04 ~]# ping -I ovirtmgmt -M do -s 8972 192.168.0.6
PING 192.168.0.6 (192.168.0.6) from 192.168.0.4 ovirtmgmt: 8972(9000) bytes of data.
8980 bytes from 192.168.0.6: icmp_seq=1 ttl=64 time=0.171 ms
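(For context: 8972 bytes of ICMP payload plus the 8-byte ICMP header and the 20-byte
IP header add up to exactly 9000 bytes, and -M do forbids fragmentation, so getting a
reply proves jumbo frames really pass end to end on both networks.)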
[root@test3 mail]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw \
    --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [m] [100.0% done] [25878K/8599K/0K /s] [6469 /2149 /0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2188: Mon Mar 29 10:43:00 2021
read : io=3071.2MB, bw=80703KB/s, iops=20175 , runt= 38969msec <------------------
write: io=1024.9MB, bw=26929KB/s, iops=6732 , runt= 38969msec <------------------
cpu : usr=8.00%, sys=41.19%, ctx=374931, majf=0, minf=20
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=786224/w=262352/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=3071.2MB, aggrb=80702KB/s, minb=80702KB/s, maxb=80702KB/s, mint=38969msec,
maxt=38969msec
WRITE: io=1024.9MB, aggrb=26929KB/s, minb=26929KB/s, maxb=26929KB/s, mint=38969msec,
maxt=38969msec
Disk stats (read/write):
dm-3: ios=784858/261925, merge=0/0, ticks=1403884/1028357, in_queue=2433435,
util=99.88%, aggrios=786155/262410, aggrmerge=70/51, aggrticks=1409868/1039790,
aggrin_queue=2449280, aggrutil=99.82%
sda: ios=786155/262410, merge=70/51, ticks=1409868/1039790, in_queue=2449280,
util=99.82%
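Doing the arithmetic on the two 9000 MTU runs: reads went from 12882 KB/s to
80703 KB/s and writes from ~4300 KB/s to 26929 KB/s, with the runtime dropping from
~244 s to ~39 s, i.e. a bit over 6x for this particular workload, if anything better
than the 4-to-5x figure above.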
So I do agree with Guillaume; it would be worth re-evaluating the situation :)
Regards,
Adrian