Slow VM transfer speed from VMware ESXi 5

Hi,

currently I'm trying to move VMs from our vSphere 5 environment to oVirt. While the I/O performance on oVirt and on the ESXi platform is quite good (about 100 MByte/s on a 1 GBit storage link), the transfer speed using the integrated v2v feature is very slow (only 10 MByte/s). That would result in a transfer time of more than 24 hours for some machines. Do you have any ideas how I can improve the transfer speed?

Regards
  Bernhard Dick

On Sat, 18 Aug 2018 at 18:45 Bernhard Dick <bernhard@bdick.de> wrote:
Hi,
currently I'm trying to move VMs from our vSphere 5 environment to oVirt. While the I/O performance on oVirt and on the ESXi platform is quite good (about 100 MByte/s on a 1 GBit storage link), the transfer speed using the integrated v2v feature is very slow (only 10 MByte/s). That would result in a transfer time of more than 24 hours for some machines. Do you have any ideas how I can improve the transfer speed?
Regards
Bernhard Dick
Hi Bernhard,

With the latest versions of ovirt-imageio and v2v we are performing quite nicely, and without quoting exact numbers I can tell you that the weakest link is the read rate from the VMware datastore. In our lab we roughly peak at ~40 MiB/s reading a single VM, and the rest of our components (after the read from the VMware datastore) have no problem keeping up with that, i.e. buffering -> converting -> writing to imageio -> writing to storage.

So, in short, examine the read rate from the VM datastore, let us know, and please specify the versions you are using.
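A quick way to get a rough read-rate number is to time a raw read of one of the flat VMDK files over SSH from the ESXi host. This is only a sketch; it assumes SSH is enabled on the host, and the host name and datastore path are placeholders:

    # read 1 GiB of a flat VMDK on the ESXi host and discard it locally;
    # the receiving dd prints the effective throughput when it finishes
    ssh root@esxi.example.com \
        "dd if=/vmfs/volumes/datastore1/myvm/myvm-flat.vmdk bs=1048576 count=1024" \
        | dd of=/dev/null bs=1M

The figure reported by the local dd approximates the over-the-wire read rate from that datastore.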

On 19 Aug 2018, at 09:59, Roy Golan <rgolan@redhat.com> wrote:
On Sat, 18 Aug 2018 at 18:45 Bernhard Dick <bernhard@bdick.de> wrote:
Hi,
currently I'm trying to move VMs from our vSphere 5 environment to oVirt. While the I/O performance on oVirt and on the ESXi platform is quite good (about 100 MByte/s on a 1 GBit storage link), the transfer speed using the integrated v2v feature is very slow (only 10 MByte/s). That would result in a transfer time of more than 24 hours for some machines. Do you have any ideas how I can improve the transfer speed?
Regards Bernhard Dick
Hi Bernhard,
With the latest versions of ovirt-imageio and v2v we are performing quite nicely [...]

The difference is that with the integrated v2v you don't use any of that. It goes through the vCenter server, which is the major slowdown. With 10 MB/s I do not expect the bottleneck is on our side in any way; after all, the integrated v2v writes locally, directly to the prepared target volume, so it is probably even faster than imageio.

The "new" virt-v2v -o rhv-upload method is not integrated in the GUI, but it supports VDDK and SSH methods of access, which both should be faster. You could try that, but you'd need to use it on the command line. https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/ might help to use it a bit more nicely.

Thanks,
michal
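For reference, a sketch of what the command line for the rhv-upload method Michal mentions might look like with the VDDK input; all host names, paths, the VM name and the storage domain name are placeholders, and the exact set of -io/-oo options depends on the virt-v2v version in use:

    virt-v2v -ic 'vpx://administrator@vcenter.example.com/Datacenter/esxi1?no_verify=1' "myvm" \
        -it vddk \
        -io vddk-libdir=/opt/vmware-vix-disklib-distrib \
        -io vddk-thumbprint=xx:xx:xx:... \
        -o rhv-upload \
        -oc https://ovirt-engine.example.com/ovirt-engine/api \
        -op /tmp/ovirt-admin-password \
        -oo rhv-cafile=/tmp/ca.pem \
        -oo rhv-cluster=Default \
        -os mydata -of raw

The SSH variant replaces the vpx/VDDK input with something like "-i vmx -it ssh ssh://root@esxi1.example.com/vmfs/volumes/datastore1/myvm/myvm.vmx"; see the virt-v2v(1) man page for the options supported by your version.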

[...]
Hi,

On 21.08.2018 at 17:02, Michal Skrivanek wrote:

The difference is that with the integrated v2v you don't use any of that. It goes through the vCenter server, which is the major slowdown. With 10 MB/s I do not expect the bottleneck is on our side in any way; after all, the integrated v2v writes locally, directly to the prepared target volume, so it is probably even faster than imageio.

VMware as the bottleneck was also my idea. I just had the time to take a look into it: locally, the oVirt and the VMware hosts can access their storage at about 100 MB/s. I also see that on the receiving side Linux writes I/O in 100 MB/s bursts and then waits some time before a new write happens.

The "new" virt-v2v -o rhv-upload method is not integrated in the GUI, but it supports VDDK and SSH methods of access, which both should be faster. You could try that, but you'd need to use it on the command line. https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/ might help to use it a bit more nicely.

I will take a look into it.

Regards
  Bernhard

[...]
Hi,

it took some time to answer due to some other stuff, but now I had the time to look into it.

On 21.08.2018 at 17:02, Michal Skrivanek wrote:

The "new" virt-v2v -o rhv-upload method is not integrated in the GUI, but it supports VDDK and SSH methods of access, which both should be faster. You could try that, but you'd need to use it on the command line. https://github.com/oVirt/ovirt-ansible-v2v-conversion-host/ might help to use it a bit more nicely.

I first tried the SSH way, which already improved the speed. Afterwards I did some more experiments and ended up using vmfs-tools to mount the VMware datastore directly, and I now see transfer speeds of ~50-60 MB/s when transferring to an oVirt export domain. This seems to be the maximum the system in use can handle with the FUSE-VMFS approach. That would be fast enough in my case (and is a huge improvement).

However, I cannot use the rhv-upload method, because my storage domain is iSCSI and I get the error that sparse file types are not allowed (as described at https://bugzilla.redhat.com/show_bug.cgi?id=1600547). The solution from the bug does not help either, because then I immediately get the error message that I'd need to use -oa sparse when using rhv-upload. This happens with the development version 1.39.9 of libguestfs and with the git master branch. Do you have some advice how to fix this / which version to use?

Regards
  Bernhard
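A sketch of the vmfs-tools approach described above, assuming the datastore LUN is visible as a block device on the conversion host and the VM is powered off; the device, mount point, VM name and export domain path are placeholders:

    # mount the VMFS datastore read-only via FUSE (vmfs-fuse comes from the vmfs-tools package)
    mkdir -p /mnt/vmfs
    vmfs-fuse /dev/sdb1 /mnt/vmfs

    # convert the guest from its .vmx file and write it to an oVirt export storage domain
    virt-v2v -i vmx /mnt/vmfs/myvm/myvm.vmx \
        -o rhv -os nfs.example.com:/export/export-domain

After the conversion, the VM can be imported from the export domain in the oVirt administration portal.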
--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de
jabber: bernhard@jabber.bdick.de
Tel   : +49.2812068620
Mobil : +49.1747607927
FAX   : +49.2812068621
USt-IdNr.: DE274728845

On Fri, Sep 14, 2018 at 7:21 PM Bernhard Dick <bernhard@bdick.de> wrote:
Hi,
it took some time to answer due to some other stuff, but now I had the time to look into it.
[...]
I first tried the SSH way, which already improved the speed. Afterwards I did some more experiments and ended up using vmfs-tools to mount the VMware datastore directly, and I now see transfer speeds of ~50-60 MB/s when transferring to an oVirt export domain. This seems to be the maximum the system in use can handle with the FUSE-VMFS approach. That would be fast enough in my case (and is a huge improvement).

However, I cannot use the rhv-upload method, because my storage domain is iSCSI and I get the error that sparse file types are not allowed (as described at https://bugzilla.redhat.com/show_bug.cgi?id=1600547). The solution from the bug does not help either, because then I immediately get the error message that I'd need to use -oa sparse when using rhv-upload. This happens with the development version 1.39.9 of libguestfs and with the git master branch. Do you have some advice how to fix this / which version to use?
I used to disable the limit enforcing "sparse" in the libguestfs upstream source, but lately the simple check at the Python plugin level was moved to the OCaml code, and I did not have time to understand it yet. If you want to remove the limit, try to look here:
https://github.com/libguestfs/libguestfs/blob/51a9c874d3f0a9c4780f2cd3ee7072...

On RHEL there is no such limit, and you can import VMs to any kind of storage.

Richard, can we remove the limit on sparse format? I don't see how this limit helps anyone.

oVirt supports several combinations:

file:
- raw sparse
- raw preallocated
- qcow2 sparse (unsupported in v2v)

block:
- raw preallocated
- qcow2 sparse (unsupported in v2v)

It seems that the oVirt SDK does not have a good way to select the format yet, so virt-v2v cannot select the format for the user. This means the user needs to select the format.

Nir
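To illustrate the block-storage row from the list above: with a build that does not enforce the sparse-only check (as on RHEL), the format and allocation would be requested explicitly on the virt-v2v command line. A sketch only, with the input path, engine URL and storage domain name as placeholders:

    virt-v2v -i vmx /mnt/vmfs/myvm/myvm.vmx \
        -o rhv-upload \
        -oc https://ovirt-engine.example.com/ovirt-engine/api \
        -op /tmp/ovirt-admin-password \
        -oo rhv-cafile=/tmp/ca.pem \
        -os iscsi-data \
        -of raw -oa preallocated

Here "-of raw -oa preallocated" corresponds to the "raw preallocated" combination supported on block (iSCSI) domains.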
--
Dipl.-Inf. Bernhard Dick
Auf dem Anger 24
DE-46485 Wesel
www.BernhardDick.de
jabber: bernhard@jabber.bdick.de
Tel   : +49.2812068620
Mobil : +49.1747607927
FAX   : +49.2812068621
USt-IdNr.: DE274728845

On Sun, Sep 16, 2018 at 07:30:09PM +0300, Nir Soffer wrote:
I used to disable the limit enforcing "sparse" in the libguestfs upstream source, but lately the simple check at the Python plugin level was moved to the OCaml code, and I did not have time to understand it yet.
If you want to remove the limit, try to look here: https://github.com/libguestfs/libguestfs/blob/51a9c874d3f0a9c4780f2cd3ee7072...
On RHEL, there is no such limit, and you can import vms to any kind of storage.
Richard, can we remove the limit on sparse format? I don't see how this limit helps anyone.
We already remove it downstream in all RHEL and LP builds. Here is the commit which does that:
https://github.com/libguestfs/libguestfs/commit/aa5608a922bd35db28f555e53aea...

We could remove it upstream, but AIUI it causes conversions to break with no easy way for users to understand what -oa modes are supported by what backends. To fix it properly we need a way for oVirt / imageio / whatever to describe what modes are possible for the current backend.
oVirt supports several combinations:

file:
- raw sparse
- raw preallocated
- qcow2 sparse (unsupported in v2v)

block:
- raw preallocated
- qcow2 sparse (unsupported in v2v)

It seems that the oVirt SDK does not have a good way to select the format yet, so virt-v2v cannot select the format for the user. This means the user needs to select the format.
Right. There are two open bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1600547
https://bugzilla.redhat.com/show_bug.cgi?id=1574734

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch http://libguestfs.org/virt-builder.1.html

On 17.09.2018 at 09:53, Richard W.M. Jones wrote:

On Sun, Sep 16, 2018 at 07:30:09PM +0300, Nir Soffer wrote:
I used to disable the limit enforcing "sparse" in the libguestfs upstream source, but lately the simple check at the Python plugin level was moved to the OCaml code, and I did not have time to understand it yet.
If you want to remove the limit, try to look here: https://github.com/libguestfs/libguestfs/blob/51a9c874d3f0a9c4780f2cd3ee7072...
On RHEL, there is no such limit, and you can import vms to any kind of storage.
Richard, can we remove the limit on sparse format? I don't see how this limit helps anyone.
We already remove it downstream in all RHEL and LP builds. Here is the commit which does that:
https://github.com/libguestfs/libguestfs/commit/aa5608a922bd35db28f555e53aea...
Thanks for pointing to the commit. I was lazy and recompiled virt-v2v from the rhel branch now, and it worked fine.

Regards
  Bernhard
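For anyone wanting to reproduce this, a rough sketch of building virt-v2v from a libguestfs rhel branch; the branch name below is a placeholder for whichever rhel-* branch matches your release, and the usual libguestfs build dependencies are assumed to be installed:

    git clone https://github.com/libguestfs/libguestfs.git
    cd libguestfs
    git checkout rhel-7.6        # placeholder branch name
    ./autogen.sh
    make
    ./run virt-v2v --version     # run the freshly built virt-v2v from the build tree

The "./run" wrapper sets up the environment so the just-built tools can be used without installing them.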
participants (5)
- Bernhard Dick
- Michal Skrivanek
- Nir Soffer
- Richard W.M. Jones
- Roy Golan