On Tue, Oct 1, 2019 at 12:49 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:

Thank you very much, Amit,

 

I hope the result of the suggested tests allows us to improve the speed for the specific IO test case as well.

 

Apologies for not being clearer, but I was referring to changing the mount options for the storage where the SHE also runs. It cannot be put into Maintenance mode, since the engine is running on it.
What should I do in this case? It's clear that I need to power it down, but where can I then change the settings?


You can see a similar question about changing the mnt_options of the hosted engine storage, along with the answer, here [1]:
[1] https://lists.ovirt.org/pipermail/users/2018-January/086265.html
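For reference, a minimal sketch of the approach described in [1], assuming the hosted-engine CLI's get/set-shared-config subcommands; the exact key name and --type value are assumptions and should be verified against your oVirt release before applying anything:

# Inspect the mount options currently stored in the shared configuration
# (key name and --type below are assumptions, verify them first)
hosted-engine --get-shared-config mnt_options --type=he_conf
# Store the new options; the values shown are examples only
hosted-engine --set-shared-config mnt_options "rsize=65536,wsize=65536" --type=he_conf
# The local copy in /etc/ovirt-hosted-engine/hosted-engine.conf may need the same
# change; a global maintenance window and an agent/broker restart are still required.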

 

Kindly awaiting your reply.

 

— — —
Met vriendelijke groet / Kind regards,

Marko Vrgotic

 

From: Amit Bawer <abawer@redhat.com>
Date: Saturday, 28 September 2019 at 20:25
To: "Vrgotic, Marko" <M.Vrgotic@activevideo.com>
Cc: Tony Brian Albers <tba@kb.dk>, "hunter86_bg@yahoo.com" <hunter86_bg@yahoo.com>, "users@ovirt.org" <users@ovirt.org>
Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage

 

On Fri, Sep 27, 2019 at 4:02 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:

Hi oVirt gurus,

 

Thanks to Tony, who pointed me toward the discovery process; the IO performance seems to depend greatly on the flags used.

 

[root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=100000

100000+0 records in

100000+0 records out

51200000 bytes (51 MB) copied, 0.108962 s, 470 MB/s

[root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=100000 oflag=dsync

100000+0 records in

100000+0 records out

51200000 bytes (51 MB) copied, 322.314 s, 159 kB/s

 

The dsync flag tells dd to bypass all buffers and caches (except certain kernel buffers) and physically write the data to disk before writing further. According to a number of sites I looked at, this is the way to test server latency with regard to IO operations. The difference in performance is huge, as you can see (below I have added results from tests with 4k and 8k blocks).
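For anyone repeating these tests, a short comparison of the usual dd variants may help put the numbers in context (the file path and sizes are only illustrative):

# 1) Buffered writes: mostly measures the page cache, not the storage.
dd if=/dev/zero of=/tmp/ddtest.img bs=4096 count=100000
# 2) Sync every write (as measured above): each write must reach stable storage
#    before the next one starts, so throughput is roughly 1 / per-write latency.
#    For example, 100000 writes in ~322 s is about 310 writes/s, i.e. ~3.2 ms
#    per synchronous write.
dd if=/dev/zero of=/tmp/ddtest.img bs=4096 count=100000 oflag=dsync
# 3) Flush once at the end: sustained throughput to stable storage.
dd if=/dev/zero of=/tmp/ddtest.img bs=4096 count=100000 conv=fdatasync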

 

Still, a certain software component we run tests with writes data in this or a similar way, which is why I got this complaint in the first place.

 

Here are my current NFS mount settings:

rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5

 

If you have any suggestions on possible NFS tuning options to try to increase performance, I would highly appreciate them.

Can someone tell me how to change NFS mount options in oVirt for already existing/used storage?

 

Taking into account your network's configured MTU [1] and Linux version [2], you can tune the wsize and rsize mount options.

Editing mount options can be done from the Storage->Domains->Manage Domain menu.
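Before editing anything, it may also be worth confirming what the client actually negotiated and what MTU the storage path uses; a small sketch, run on the hypervisor (the interface name is an assumption, substitute the NIC or bond that carries the NFS traffic):

# Mount options as actually negotiated by the NFS client, per mount point
nfsstat -m
# The same information from the kernel's mount table
grep rhev /proc/mounts
# MTU of the interface carrying storage traffic (ovirtmgmt is only an example)
ip link show ovirtmgmt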

 

Test results with 4096- and 8192-byte block sizes:

[root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=100000

100000+0 records in

100000+0 records out

409600000 bytes (410 MB) copied, 1.49831 s, 273 MB/s

[root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=100000 oflag=dsync

100000+0 records in

100000+0 records out

409600000 bytes (410 MB) copied, 349.041 s, 1.2 MB/s

 

[root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=100000

100000+0 records in

100000+0 records out

819200000 bytes (819 MB) copied, 11.6553 s, 70.3 MB/s

[root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=100000 oflag=dsync

100000+0 records in

100000+0 records out

819200000 bytes (819 MB) copied, 393.035 s, 2.1 MB/s

 

 

From: "Vrgotic, Marko" <M.Vrgotic@activevideo.com>
Date: Thursday, 26 September 2019 at 09:51
To: Amit Bawer <abawer@redhat.com>
Cc: Tony Brian Albers <tba@kb.dk>, "hunter86_bg@yahoo.com" <hunter86_bg@yahoo.com>, "users@ovirt.org" <users@ovirt.org>
Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage

 

Dear all,

 

I very much appreciate all help and suggestions so far.

 

Today I will send the test results and the current mount settings for NFS4. Our production setup uses a NetApp-based NFS server.

 

I am surprised by the results of Tony's test.

We also have one setup with Gluster-based NFS, and I will run the tests on that as well.

Sent from my iPhone

 

On 25 Sep 2019, at 14:18, Amit Bawer <abawer@redhat.com> wrote:

 

 

On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers <tba@kb.dk> wrote:

Guys,

Just for info, this is what I'm getting on a VM that is on shared
storage via NFSv3:

--------------------------snip----------------------
[root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s

real    0m18.171s
user    0m1.077s
sys     0m4.303s
[root@proj-000 ~]#
--------------------------snip----------------------

my /etc/exports:
/data/ovirt
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

and output from 'mount' on one of the hosts:

sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-001.kac.lokalnet:_data_ovirt type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216.41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.16.216.41)

 

It is worth comparing these mount options with those of the slow shared NFSv4 mount.

 

Window size tuning can be found at the bottom of [1]; although it relates to NFSv3, it could be relevant to v4 as well.
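As a hedged sketch, guides of that kind usually adjust the kernel's TCP buffer limits via sysctl; the values below are placeholders, not recommendations for this setup:

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"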

 


Connected via a single 10 Gbit Ethernet link. Storage on the NFS server is 8 x 4 TB
SATA disks in RAID10. The NFS server is running CentOS 7.6.

Maybe you can get some inspiration from this.

/tony



On Wed, 2019-09-25 at 09:59 +0000, Vrgotic, Marko wrote:
> Dear Strahil, Amit,
>  
> Thank you for the suggestion.
> Test result with block size 4096:
> Network storage:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 409600000 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
>  
> Local storage:
>  
> avlocal2:
> [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 409600000 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
> 10:38
> avlocal3:
> [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 409600000 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s
>  
> As Amit suggested, I am also going to execute the same tests on the
> bare-metal hosts and between bare metal and NFS to compare the results.
>  
>  
> — — —
> Met vriendelijke groet / Kind regards,
>
> Marko Vrgotic
>  
>  
>  
>  
> From: Strahil <hunter86_bg@yahoo.com>
> Date: Tuesday, 24 September 2019 at 19:10
> To: "Vrgotic, Marko" <M.Vrgotic@activevideo.com>, Amit <abawer@redhat
> .com>
> Cc: users <users@ovirt.org>
> Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> Storage
>  
> Why don't you try with 4096 ?
> Most block devices have a block size of 4096, and anything below that
> slows them down.
> Best Regards,
> Strahil Nikolov
> On Sep 24, 2019 17:40, Amit Bawer <abawer@redhat.com> wrote:
> Have you reproduced the performance issue when checking this directly
> with the shared storage mount, outside the VMs?
>  
> On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko <M.Vrgotic@activevideo
> .com> wrote:
> Dear oVirt,
>  
> I have executed some tests regarding IO disk speed on the VMs,
> running on shared storage and local storage in oVirt.
>  
> Results of the tests on local storage domains:
> avlocal2:
> [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 51200000 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
>  
> avlocal3:
> [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 51200000 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
>  
> Results of the test on shared storage domain:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> count=100000 oflag=dsync
> 100000+0 records in
> 100000+0 records out
> 51200000 bytes (51 MB) copied, 283.499 s, 181 kB/s
>  
> Why is it so low? Is there anything I can do to tune or configure VDSM
> or another service to speed this up?
> Any advice is appreciated.
>  
> Shared storage is based on NetApp, with a 20 Gbps LACP path from the
> hypervisor to the NetApp volume, set to MTU 9000. The protocol used is
> NFS 4.0.
> oVirt is 4.3.4.3 SHE.
>  
>  
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XYSFEGAHCWXIY2JILDE24EVAC5ZVKWU/
--
Tony Albers - Systems Architect - IT Development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142