On March 6, 2020 6:02:03 PM GMT+02:00, Jayme <jaymef(a)gmail.com> wrote:
I have a 3-server HCI setup with Gluster replica 3 storage (10GbE and SSD disks).
Small-file performance inside the VMs is pretty terrible compared to a
similarly spec'ed VM using an NFS mount (10GbE network, SSD disk).
VM with gluster storage:
# dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
VM with NFS:
# dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 2.20059 s, 233 kB/s
This is a very big difference: about 2 seconds to complete 1000 synchronous
512-byte writes on the NFS VM vs. 53 seconds on the Gluster-backed VM.
Aside from enabling libgfapi, is there anything I can tune on the Gluster or
VM side to improve small-file performance? I have seen some guides by
Red Hat regarding small-file performance, but I'm not sure what (if any) of
it applies to oVirt's implementation of Gluster in HCI.
You can use the rhgs-random-io tuned profile from
ftp://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-...
and try it on your hosts.
In my case, I have modified it so it's a mixture between rhgs-random-io and the
profile for Virtualization Host.
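For example (assuming the tuned profile is installed on the hosts), switching and verifying it is just:

# tuned-adm profile rhgs-random-io
# tuned-adm active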
Also, ensure that your bricks are using XFS with the relatime/noatime mount option and that the I/O
scheduler for the SSDs is either 'noop' or 'none'. The default I/O scheduler on RHEL 7 is deadline,
which gives preference to reads, while your workload is definitely write-heavy.
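For example (sda and the brick paths below are only placeholders for your actual SSD device and brick mount):

# cat /sys/block/sda/queue/scheduler
# echo noop > /sys/block/sda/queue/scheduler

and in /etc/fstab something like:

/dev/mapper/gluster_vg-brick1 /gluster_bricks/brick1 xfs inode64,noatime 0 0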
Ensure that the virt settings are enabled for your gluster volumes:
'gluster volume set <volname> group virt'
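You can confirm that the group settings were applied with:

# gluster volume info <volname>

and check the 'Options Reconfigured' section.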
Also, are you running on fully allocated (preallocated) disks for the VM, or did you start thin-provisioned?
I'm asking because the creation of new shards at the Gluster level is a slow operation.
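If you are not sure, qemu-img can show the allocation of a disk image (the path below is just a placeholder for the image under your storage domain mount):

# qemu-img info /path/to/vm_disk_image

Comparing 'virtual size' with 'disk size' tells you how much is really allocated.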
Have you profiled the volume with Gluster's built-in profiling? It can clarify what is going on.
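Something along these lines on one of the Gluster hosts (replace <volname>):

# gluster volume profile <volname> start
  ... run the dd test inside the VM ...
# gluster volume profile <volname> info
# gluster volume profile <volname> stop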
Also, are you comparing apples to apples?
For example, a single SSD mounted and exported over NFS versus a replica 3 volume on the same type
of SSD? If not, the NFS server can deliver more IOPS due to multiple disks behind it, while
Gluster has to write the same data on all nodes.
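One quick sanity check: while the dd test is running, watch the disks on both the NFS server and the Gluster hosts, e.g.:

# iostat -xm 2

and compare the utilization and await of the devices behind each storage.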
Best Regards,
Strahil Nikolov