Hi Sahina,
Thanks for your reply.

Let me share my test results with gluster v3.
I have a 3-node hyperconverged setup with a 1 Gbit/s network and SATA-based SSDs for LVM caching.
Testing the bricks directly showed performance higher than the network can deliver.
1. Tested oVirt 4.2.7/4.2.8 with FUSE mounts, using 'dd if=/dev/zero of=<file on the default oVirt gluster mount point> bs=1M count=5000' (a variant of this test that bypasses the page cache is sketched below).
Results: 56 MB/s directly on the mount point, 20 +/- 2 MB/s from inside a VM.
Reads on the FUSE mount point -> 500+ MB/s.
Disabling sharding increased performance on the FUSE mount point, but brought nothing beneficial inside a VM.

Converting the bricks of a volume to 'tmpfs' did not bring any performance gain for the FUSE mount, which suggests the brick storage itself is not the bottleneck.
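For reference, the write test I would run to keep the page cache out of the picture looks roughly like this - the mount path and file names below are placeholders, not my real ones:

    # Sequential write on the FUSE mount point (placeholder path), bypassing the page cache
    dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/node1:_data/ddtest.img \
       bs=1M count=5000 oflag=direct conv=fsync

    # The same test from inside a VM, against its own virtual disk
    dd if=/dev/zero of=/root/ddtest.img bs=1M count=5000 oflag=direct conv=fsync

    # Clean up the test file afterwards
    rm -f /rhev/data-center/mnt/glusterSD/node1:_data/ddtest.img

If O_DIRECT is refused on the FUSE mount, dropping oflag=direct and keeping conv=fsync still gives a fairer number than plain dd.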

2. Tested oVirt 4.2.7/4.2.8 with gfapi - performance in the VM -> approx. 30 MB/s.

3. Gluster native NFS (now deprecated) on oVirt 4.2.7/4.2.8 -> 120 MB/s on the mount point, 100+ MB/s in the VM.
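For anyone reproducing the gnfs numbers, the mount is a plain NFSv3 mount of the volume - server and volume names here are placeholders:

    # Gluster native NFS only speaks NFSv3
    mount -t nfs -o vers=3 node1:/data /mnt/gnfs-test
    dd if=/dev/zero of=/mnt/gnfs-test/ddtest.img bs=1M count=5000 conv=fsync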

My current setup:
storhaug + CTDB + NFS-Ganesha (oVirt 4.2.7/4.2.8) -> 80 +/- 2 MB/s in the VM; reads are around the same speed.
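The Ganesha export is consumed as a regular NFS mount through the floating IP that CTDB manages; a sketch with placeholder address and volume name (vers=4.1 is just my choice here - Ganesha also serves v3 and v4.0):

    # Mount the Ganesha export via the CTDB-managed virtual IP (placeholder IP and volume)
    mount -t nfs -o vers=4.1 192.168.1.250:/data /mnt/ganesha-test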

Sadly, I didn't have time to test performance on Gluster v5 (oVirt 4.3.0), but I haven't noticed any performance gain for the engine.

My suspicion with FUSE is that when a gluster node also plays the role of a client, it still uses network bandwidth to talk to its own local brick, but I could be wrong.
According to some people on the gluster lists, this FUSE performance is expected, but my tests with sharding disabled show better performance.

Most of the time 'gtop' does not show any spikes, and iftop shows that network usage does not go over 500 Mbit/s.
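A quick way to check whether the local FUSE client really goes over the wire to its own brick - the interface and brick port below are just examples, the real ports are listed by 'gluster volume status':

    # TCP connections of the gluster processes (the FUSE client shows up as 'glusterfs')
    ss -tnp | grep glusterfs

    # Watch traffic on an example brick port while the dd test runs
    iftop -i eth0 -f 'port 49152'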

As I hit some issues with the deployment on 4.2.7, I decided to stop my tests for now.

Best Regards,
Strahil Nikolov

On Feb 25, 2019 09:17, Sahina Bose <sabose@redhat.com> wrote:
The options set on the gluster volume are tuned for data consistency and reliability.

Some of the changes that you can try:
1. Use gfapi - however, this will not provide HA if the server used to access the gluster volume goes down (the backup-volfile-servers are not used in the gfapi case). You can change this using the engine-config tool for your cluster level (see the example commands after this list).
2. Change remote-dio to 'enable' to turn on client-side brick caching. Ensure that you have power backup in place, so that you don't end up with data loss in case a server goes down before data is flushed (also sketched below).
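A rough sketch of both changes - the cluster level and volume name are placeholders, and the engine needs a restart for the config change to take effect:

    # Enable gfapi access for the given cluster compatibility level (placeholder level)
    engine-config -s LibgfApiSupported=true --cver=4.2
    systemctl restart ovirt-engine

    # Turn on remote-dio on the data volume (placeholder volume name)
    gluster volume set data network.remote-dio enable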

If you're seeing issues with a particular version of glusterfs, please provide gluster profile output so we can help identify the bottleneck (see https://docs.gluster.org/en/latest/Administrator%20Guide/Monitoring%20Workload/ for how to do this).
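The commands boil down to something like this - <volname> is the volume backing the storage domain:

    # Start collecting per-brick statistics
    gluster volume profile <volname> start

    # ... run the workload (dd test, VM I/O) ...

    # Dump the statistics and stop profiling
    gluster volume profile <volname> info > /tmp/gluster-profile.txt
    gluster volume profile <volname> stop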

On Fri, Feb 22, 2019 at 1:39 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Adding Sahina

On Fri, Feb 22, 2019 at 06:51 Strahil <hunter86_bg@yahoo.com> wrote:

I have done some testing, and it seems that storhaug + ctdb + nfs-ganesha shows decent performance in a 3-node hyperconverged setup.
FUSE mounts are hitting some kind of limit when mounting Gluster 3.12.15 volumes.

Best Regards,
Strahil Nikolov



--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com