Dear all,

I very much appreciate all help and suggestions so far.

Today I will send the test results and the current mount settings for NFSv4. Our production setup uses a NetApp-based NFS server.

I am surprised by the results from Tony’s test.
We also have one setup with Gluster-based NFS, and I will run the tests on that as well.

Sent from my iPhone

On 25 Sep 2019, at 14:18, Amit Bawer <abawer@redhat.com> wrote:




On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers <tba@kb.dk> wrote:
Guys,

Just for info, this is what I'm getting on a VM that is on shared
storage via NFSv3:

--------------------------snip----------------------
[root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s

real    0m18.171s
user    0m1.077s
sys     0m4.303s
[root@proj-000 ~]#
--------------------------snip----------------------

my /etc/exports:
/data/ovirt
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

and output from 'mount' on one of the hosts:

sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-001.kac.lokalnet:_data_ovirt type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216.41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.16.216.41)

It is worth comparing these mount options with those of the slow shared NFSv4 mount.

Window size tuning can be found at the bottom of [1]; although it relates to NFSv3, it could be relevant to v4 as well.
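For reference, the options actually negotiated for the slow mount could be dumped on a hypervisor with something like the following (the path below is only a placeholder for the oVirt storage domain mount):

--------------------------snip----------------------
# show effective options (rsize/wsize, timeo, proto, ...) for all NFS mounts
nfsstat -m

# or only the oVirt storage domain mounts
mount -t nfs4 | grep /rhev/data-center/mnt
--------------------------snip----------------------

If rsize/wsize turn out much smaller than the 1048576 seen above, larger values (e.g. rsize=1048576,wsize=1048576) could be requested through the storage domain's additional mount options, assuming your setup allows editing them.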


Connected via a single 10 Gbit Ethernet link. Storage on the NFS server is 8 x 4 TB SATA disks in RAID10. The NFS server is running CentOS 7.6.

Maybe you can get some inspiration from this.

/tony



On Wed, 2019-09-25 at 09:59 +0000, Vrgotic, Marko wrote:
> Dear Strahil, Amit,
>  
> Thank you for the suggestion.
> Test result with block size 4096:
> Network storage:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 409600000 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
>  
> Local storage:
>  
> avlocal2:
> [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 409600000 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
>
> avlocal3:
> [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 409600000 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s
>  
> As Amit suggested, I am also going to execute the same tests on the
> bare-metal hosts and between bare metal and NFS to compare results.
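> (A direct check on a hypervisor could look roughly like the following;
> the mount path is only a placeholder for the oVirt NFS domain mount
> point:
> dd if=/dev/zero of=/rhev/data-center/mnt/<netapp>:_<export>/dd_test.img bs=4096 count=100000 oflag=dsync
> rm -f /rhev/data-center/mnt/<netapp>:_<export>/dd_test.img
> )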
>  
>  
> — — —
> Met vriendelijke groet / Kind regards,
>
> Marko Vrgotic
>  
>  
>  
>  
> From: Strahil <hunter86_bg@yahoo.com>
> Date: Tuesday, 24 September 2019 at 19:10
> To: "Vrgotic, Marko" <M.Vrgotic@activevideo.com>, Amit <abawer@redhat
> .com>
> Cc: users <users@ovirt.org>
> Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> Storage
>  
> Why don't you try with 4096?
> Most block devices have a block size of 4096, and anything below that
> slows them down.
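> (The logical and physical block sizes of a device can be checked with,
> for example:
> blockdev --getss --getpbsz /dev/sda
> where /dev/sda is just an example device name.)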
> Best Regards,
> Strahil Nikolov
> On Sep 24, 2019 17:40, Amit Bawer <abawer@redhat.com> wrote:
> Have you reproduced the performance issue when checking this directly
> against the shared storage mount, outside the VMs?
>  
> On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:
> Dear oVirt,
>  
> I have executed some tests regarding IO disk speed on the VMs,
> running on shared storage and local storage in oVirt.
>  
> Results of the tests on local storage domains:
> avlocal2:
> [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 51200000 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
>  
> avlocal3:
> [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 51200000 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
>  
> Results of the test on shared storage domain:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 51200000 bytes (51 MB) copied, 283.499 s, 181 kB/s
>  
> Why is it so low? Is there anything I can tune or configure in VDSM
> or another service to speed this up?
> Any advice is appreciated.
>  
> Shared storage is NetApp-based, with a 20 Gbps LACP path from the
> hypervisor to the NetApp volume, set to MTU 9000. The protocol used is
> NFSv4.0.
> oVirt is 4.3.4.3 SHE.
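> (For reference, whether MTU 9000 really holds end to end can be
> verified from a hypervisor with a non-fragmenting ping towards the
> NetApp data address, e.g.:
> ping -M do -s 8972 <netapp-data-ip>
> 8972 = 9000 minus 28 bytes of IP/ICMP headers; the address is a
> placeholder.)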
>  
>  
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XYSFEGAHCWXIY2JILDE24EVAC5ZVKWU/
--
Tony Albers - Systems Architect - IT Development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142