Why don't you try with bs=4096?
Most block devices have a block size of 4096, and anything below that slows them down.
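
For example, to keep the same 51.2 MB total but write in 4 KiB blocks (the device name below is only an example, adjust it to your VM's disk):

blockdev --getpbsz /dev/vda    (reports the physical block size of the disk)
dd if=/dev/zero of=/tmp/test2.img bs=4096 count=12500 oflag=dsync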

Best Regards,
Strahil Nikolov

On Sep 24, 2019 17:40, Amit Bawer <abawer@redhat.com> wrote:
Have you reproduced the performance issue by testing directly against the shared storage mount, outside the VMs?
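
Something along these lines on one of the hosts should do; the path under /rhev/data-center/mnt/ is only a placeholder, use the actual mount point of your NFS storage domain:

dd if=/dev/zero of=/rhev/data-center/mnt/<netapp_server>:<export>/test.img bs=512 count=100000 oflag=dsync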

On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:

Dear oVirt,

 

I have executed some tests regarding IO disk speed on the VMs, running on shared storage and local storage in oVirt.

 

Results of the tests on local storage domains:

avlocal2:

[root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512 count=100000 oflag=dsync

100000+0 records in

100000+0 records out

51200000 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s

 

avlocal3:

[root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512 count=100000 oflag=dsync

100000+0 records in

100000+0 records out

51200000 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s

 

Results of the test on shared storage domain:

avshared:

[root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 count=100000 oflag=dsync

100000+0 records in

100000+0 records out

51200000 bytes (51 MB) copied, 283.499 s, 181 kB/s

 

Why is it so low? Is there anything I can tune or configure in VDSM or another service to speed this up?

Any advice is appreciated.

 

Shared storage is based on NetApp, with a 20 Gbps LACP path from the hypervisor to the NetApp volume and MTU set to 9000. The protocol used is NFSv4.0.
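
(For reference, the NFS mount options actually negotiated on the hypervisors can be checked with something like the following; the exact fields shown depend on the client:

nfsstat -m
mount -t nfs4

to confirm that vers, rsize and wsize match what the NetApp export advertises.)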

oVirt is 4.3.4.3 SHE.

 

 
