On Mon, Jul 5, 2021 at 5:06 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
qemu-img is busy in posix_fallocate(), writing one byte to every 4k block.
If you add -tt -T (as I suggested), we can see how much time each write takes,
which may explain why this takes so much time.
strace -f -p 14342 -tt -T
It seems I missed part of your suggestion... I didn't get the "-tt -T" (or
I didn't see it...).
With it I get this during the export (network traffic on the host console is
about 4 Mbit/s):
# strace -f -p 25243 -tt -T
strace: Process 25243 attached with 2 threads
[pid 25243] 09:17:32.503907 ppoll([{fd=9, events=POLLIN|POLLERR|POLLHUP}], 1, NULL, NULL, 8 <unfinished ...>
[pid 25244] 09:17:32.694207 pwrite64(12, "\0", 1, 3773509631) = 1 <0.000059>
[pid 25244] 09:17:32.694412 pwrite64(12, "\0", 1, 3773513727) = 1 <0.000078>
[pid 25244] 09:17:32.694608 pwrite64(12, "\0", 1, 3773517823) = 1 <0.000056>
[pid 25244] 09:17:32.694729 pwrite64(12, "\0", 1, 3773521919) = 1 <0.000024>
[pid 25244] 09:17:32.694796 pwrite64(12, "\0", 1, 3773526015) = 1 <0.000020>
[pid 25244] 09:17:32.694855 pwrite64(12, "\0", 1, 3773530111) = 1 <0.000015>
[pid 25244] 09:17:32.694908 pwrite64(12, "\0", 1, 3773534207) = 1 <0.000014>
[pid 25244] 09:17:32.694950 pwrite64(12, "\0", 1, 3773538303) = 1 <0.000016>
[pid 25244] 09:17:32.694993 pwrite64(12, "\0", 1, 3773542399) = 1 <0.200032>
[pid 25244] 09:17:32.895140 pwrite64(12, "\0", 1, 3773546495) = 1 <0.000034>
[pid 25244] 09:17:32.895227 pwrite64(12, "\0", 1, 3773550591) = 1 <0.000029>
[pid 25244] 09:17:32.895296 pwrite64(12, "\0", 1, 3773554687) = 1 <0.000024>
[pid 25244] 09:17:32.895353 pwrite64(12, "\0", 1, 3773558783) = 1 <0.000016>
[pid 25244] 09:17:32.895400 pwrite64(12, "\0", 1, 3773562879) = 1 <0.000015>
[pid 25244] 09:17:32.895443 pwrite64(12, "\0", 1, 3773566975) = 1 <0.000015>
[pid 25244] 09:17:32.895485 pwrite64(12, "\0", 1, 3773571071) = 1 <0.000015>
[pid 25244] 09:17:32.895527 pwrite64(12, "\0", 1, 3773575167) = 1 <0.000017>
[pid 25244] 09:17:32.895570 pwrite64(12, "\0", 1, 3773579263) = 1 <0.199493>
[pid 25244] 09:17:33.095147 pwrite64(12, "\0", 1, 3773583359) = 1 <0.000031>
[pid 25244] 09:17:33.095262 pwrite64(12, "\0", 1, 3773587455) = 1 <0.000061>
[pid 25244] 09:17:33.095378 pwrite64(12, "\0", 1, 3773591551) = 1 <0.000027>
[pid 25244] 09:17:33.095445 pwrite64(12, "\0", 1, 3773595647) = 1 <0.000021>
[pid 25244] 09:17:33.095498 pwrite64(12, "\0", 1, 3773599743) = 1 <0.000016>
[pid 25244] 09:17:33.095542 pwrite64(12, "\0", 1, 3773603839) = 1 <0.000014>
. . .
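The pattern in the trace matches the glibc posix_fallocate() fallback that Nir
described: when the filesystem doesn't support fallocate(2) (NFS only gained it
in version 4.2), glibc emulates preallocation by writing a single zero byte into
every block. A very simplified sketch of that fallback, my own illustration and
not the actual glibc code (the helper name and the 4096-byte block size are just
assumptions for the example):

/* Simplified illustration of the posix_fallocate() fallback path;
 * not the real glibc implementation. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>      /* fallocate() */
#include <sys/types.h>
#include <unistd.h>     /* pwrite() */

static int preallocate(int fd, off_t offset, off_t len)
{
    const off_t block = 4096;   /* assumed filesystem block size */

    /* Fast path: real preallocation. Works on local filesystems and
     * on NFS 4.2, which has an ALLOCATE operation. */
    if (fallocate(fd, 0, offset, len) == 0)
        return 0;
    if (errno != EOPNOTSUPP)
        return -1;

    /* Slow path: touch every block with a 1-byte write. This is the
     * pwrite64(12, "\0", 1, offset) pattern in the trace above, one
     * synchronous round trip to the NFS server per 4 KiB. */
    for (off_t pos = offset; pos < offset + len; pos += block) {
        if (pwrite(fd, "", 1, pos) != 1)
            return -1;
    }
    return 0;
}

At one byte per 4 KiB block, preallocating a 100 GiB image this way means
roughly 26 million pwrite calls, which would fit the very low throughput here.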
BTW: it seems my NAS appliance doesn't support NFS version 4.2, because if I
force it, the mount fails and engine.log shows this error for both nodes as
they try to mount:
2021-07-05 17:01:56,082+02 ERROR [org.ovirt.engine.core.bll.storage.connection.FileStorageHelper] (EE-ManagedThreadFactory-engine-Thread-2554190) [642eb6be] The connection with details '172.16.1.137:/nas/EXPORT-DOMAIN' failed because of error code '477' and error message is: problem while trying to mount target
and in vdsm.log:
MountError: (32, ';mount.nfs: Protocol not supported\n')
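For what it's worth, the missing NFS 4.2 support is probably also what pushes
the export into the slow path: without 4.2 the server has no ALLOCATE
operation, so fallocate(2) on the mount fails with EOPNOTSUPP and glibc falls
back to the byte-per-block loop. A minimal probe to confirm that on the export
mount, just my own sketch with a placeholder path:

/* fallocate-probe.c - check whether a filesystem supports real fallocate(2).
 * Build: gcc -o fallocate-probe fallocate-probe.c
 * Usage: ./fallocate-probe /path/on/the/nfs/mount/testfile   (placeholder) */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <test-file-on-mount>\n", argv[0]);
        return 2;
    }

    int fd = open(argv[1], O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Try to preallocate 1 MiB. On NFS 4.2 this becomes an ALLOCATE call;
     * older NFS versions should fail with EOPNOTSUPP. */
    if (fallocate(fd, 0, 0, 1024 * 1024) == 0)
        printf("fallocate(2) supported - preallocation can be fast\n");
    else if (errno == EOPNOTSUPP)
        printf("fallocate(2) not supported - glibc/qemu-img fall back to "
               "writing one byte per block\n");
    else
        perror("fallocate");

    close(fd);
    unlink(argv[1]);
    return 0;
}

If it prints the "not supported" line on the export domain mount, the slow
preallocation is expected with this NAS.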
With NFSv3, the command is apparently the same:
vdsm 19702 3036 7 17:15 ? 00:00:02 /usr/bin/qemu-img convert -p -t none -T none -f raw
/rhev/data-center/mnt/blockSD/679c0725-75fb-4af7-bff1-7c447c5d789c/images/530b3e7f-4ce4-4051-9cac-1112f5f9e8b5/d2a89b5e-7d62-4695-96d8-b762ce52b379
-O raw -o preallocation=falloc
/rhev/data-center/mnt/172.16.1.137:_nas_EXPORT-DOMAIN/20433d5d-9d82-4079-9252-0e746ce54106/images/530b3e7f-4ce4-4051-9cac-1112f5f9e8b5/d2a89b5e-7d62-4695-96d8-b762ce52b379
The file size seems bigger, but the throughput is still very low, just as with NFSv4.
Gianluca