On Tue, Jul 6, 2021 at 2:52 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
Too bad.
You can evaluate how oVirt 4.4 will work with this appliance using
this dd command:
dd if=/dev/zero bs=8M count=38400 of=/path/to/new/disk oflag=direct conv=fsync
We don't use dd for this, but the operation is the same on NFS < 4.2.
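For reference, if I read the numbers right, bs=8M count=38400 writes
8 MiB * 38400 = 307200 MiB, i.e. the 300G figure that comes up below;
a quick shell check of that arithmetic (result in GiB):
$ echo $((8*38400/1024))
300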
I confirm I'm able to saturate the 1 Gb/s link. I tried creating a 10 GiB file
on the StoreOnce appliance:
# time dd if=/dev/zero bs=8M count=1280 of=/rhev/data-center/mnt/172.16.1.137\:_nas_EXPORT-DOMAIN/ansible_ova/test.img oflag=direct conv=fsync
1280+0 records in
1280+0 records out
10737418240 bytes (11 GB) copied, 98.0172 s, 110 MB/s
real 1m38.035s
user 0m0.003s
sys 0m2.366s
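As a rough sanity check (treating dd's MB as decimal megabytes), 110 MB/s is
about 0.88 Gbit/s on the wire, so this does look like a saturated 1 Gb/s link:
$ echo "scale=2; 110*8/1000" | bc
.88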
So are you saying that after upgrading to 4.4.6 (or the just-released 4.4.7) I
should be able to export at this speed? Or do I need NFS v4.2 anyway?
BTW: is there any cap put in place by oVirt on the export phase (the
qemu-img command in practice), designed for example not to perturb the
activity of the hypervisor? Or do you think that if I have a 10 Gb/s network
backend, powerful disks on the oVirt side, and plenty of processing power on
the NFS server, I should get much higher speed?
Based on the 50 MiB/s rate you reported earlier, I guess you have a 1 Gbit
network to this appliance, so zeroing can do up to 128 MiB/s, which will take
about 40 minutes for 300G.
Using NFS 4.2, fallocate will complete in less than a second.
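A quick check of that estimate, assuming 300 GiB written at a steady
128 MiB/s (result in minutes):
$ echo $((300*1024/128/60))
40
And a minimal sketch of doing the same preallocation by hand on an NFS 4.2
mount (the target path is just a placeholder, not the exact path oVirt uses):
$ fallocate -l 300G /rhev/data-center/mnt/<server>\:_<export>/disk.img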
I can sort of confirm this also for 4.3.10.
I have a test CentOS 7.4 VM configured as an NFS server. If I configure it
as an export domain using the default auto-negotiate option, it is
(strangely enough) mounted as NFS v4.1 and the initial fallocate takes some
minutes (55 GB disk).
If I reconfigure it forcing NFS v4.2, the initial fallocate is immediate, in
the sense that "ls -l" on the export domain shows the full size of the
virtual disk almost right away.
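In case it helps anyone double-check this, the NFS version actually used for
the export domain mount can be verified on the host with standard tools
(nothing oVirt-specific); look for vers=4.1 vs vers=4.2 in the mount options:
$ nfsstat -m
$ mount -t nfs4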
Thanks,
Gianluca