Hi,
it took me some time to answer because of other commitments, but now I
have had the time to look into it.
On 21.08.2018 at 17:02, Michal Skrivanek wrote:
[...]
>> Hi Bernhard,
>>
>> With the latest version of ovirt-imageio and v2v we are performing
>> quite nicely, and without specifying
> the difference is that with the integrated v2v you don't use any of
> that. It goes through the vCenter server, which is the major
> slowdown. At 10 MB/s I do not expect the bottleneck to be on our side
> in any way. After all, the integrated v2v writes locally, directly to
> the prepared target volume, so it is probably even faster than
> imageio.
> The "new" virt-v2v -o rhv-upload method is not integrated into the
> GUI, but it supports VDDK and SSH access methods, which should both
> be faster. You could try that, but you would need to use it on the
> command line.
I first tried the SSH method, which already improved the speed. I then
did some more experiments and ended up using vmfs-tools to mount the
VMware datastore directly; with that I now see transfer speeds of
~50-60 MB/s when transferring to an oVirt export domain. That seems to
be the maximum the system in use can handle via the fuse-vmfs route,
but it is fast enough in my case (and a huge improvement).
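For reference, this is roughly what I run now; the device node, mount
point, guest name and export path are placeholders from my setup:

  # mount the VMFS datastore (read-only) with vmfs-fuse from vmfs-tools
  vmfs-fuse /dev/sdb1 /mnt/vmfs

  # convert the guest from its .vmx file into the oVirt export domain
  # (-o rhv writes to an export storage domain reachable over NFS)
  virt-v2v -i vmx /mnt/vmfs/myguest/myguest.vmx \
      -o rhv -os nfs.example.com:/export/v2v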
However, I cannot use the rhv-upload method, because my storage domain
is iSCSI and I get an error that sparse output is not allowed (as
described in https://bugzilla.redhat.com/show_bug.cgi?id=1600547).
The workaround from that bug does not help either: with it I
immediately get an error message saying that I need to use -oa sparse
with rhv-upload. This happens both with the libguestfs development
version 1.39.9 and with the git master branch. Do you have any advice
on how to fix this, or which version I should use?
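The invocation I am trying looks roughly like this (the engine URL,
storage domain, cluster, password file and CA file are placeholders);
-oa preallocated is the workaround suggested in the bug above, and it
is what then triggers the "-oa sparse" error:

  virt-v2v -i vmx /mnt/vmfs/myguest/myguest.vmx \
      -o rhv-upload -oa preallocated \
      -oc https://engine.example.com/ovirt-engine/api \
      -os my-iscsi-domain \
      -op /tmp/ovirt-admin-password \
      -oo rhv-cafile=/tmp/ca.pem \
      -oo rhv-cluster=Default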
Regards
Bernhard
> https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/ might
> help to use it a bit more nicely.
>
> Thanks,
> michal
>> a number, I can tell you that the weakest link is the read rate from
>> the VMware datastore. In our lab we roughly peak at ~40 MiB/sec when
>> reading a single VM, and the rest of our components (after the read
>> from the VMware datastore) have no problem dealing with that, i.e.
>> buffering -> converting -> writing to imageio -> writing to storage.
>>
>> So, in short, examine the read rate from the VM datastore, let us
>> know, and please specify the versions you are using.
--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de
jabber: bernhard(a)jabber.bdick.de
Tel : +49.2812068620
Mobil : +49.1747607927
FAX : +49.2812068621
USt-IdNr.: DE274728845