[Users] Migrating from KVM to oVirt 3.1 fails - corrupt OVF

I'm attempting to fine-tune the process of getting my KVM/libvirt-managed VMs over into my new oVirt infrastructure, and the virt-v2v import is failing in the WUI with "Failed to read VM 'dh-imager01' OVF, it may be corrupted". I've attached both engine and vdsm logs, a snapshot from when I ran the virt-v2v command until I saw the failure under Events.

virt-v2v command used:

# virt-v2v -i libvirtxml -o rhev -os dc-vmarchitect.tamu.edu:/exportdomain dh-imager01.xml
dh-imager01_sys.qcow2: 100% [====================] D 0h00m37s
virt-v2v: dh-imager01 configured with virtio drivers.

The XML has been modified numerous times based on past mailing list comments to have the VNC and network information removed, but it still fails the same way. I've attached the latest XML that was used in the logged failure as dh-imager01.xml. I've also tried passing the bridge device (ovirtmgmt) in the above command, with the same failure results.

Node and Engine are both CentOS 6.2, with vdsm-4.10.0-4 and ovirt-engine-3.1 respectively.

Please let me know what other configuration information could be helpful to debug / troubleshoot this.

Are there any other methods besides a virt-v2v migration that would allow me to use my previous KVM VMs within oVirt?

Thanks
- Trey

On 07/18/2012 06:00 PM, Trey Dockendorf wrote:
I'm attempting to fine-tune the process of getting my KVM/Libvirt managed VMs over into my new oVirt infrastructure, and the virt-v2v import is failing in the WUI with "Failed to read VM 'dh-imager01' OVF, it may be corrupted". I've attached both engine and vdsm logs that are a snapshot from when I ran the virt-v2v command until I saw the failure under Events.
matt - any thoughts?

On 18/07/12 23:52, Itamar Heim wrote:
matt - any thoughts?
Nothing springs to mind immediately, but it sounds like v2v is producing an invalid OVF. If somebody can diagnose what the problem with the OVF is, I can fix v2v.

Matt
--
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team

GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

On Thu, Jul 19, 2012 at 4:00 AM, Matthew Booth <mbooth@redhat.com> wrote:
Nothing springs to mind immediately, but it sounds like v2v is producing an invalid OVF. If somebody can diagnose what the problem with the OVF is I can fix v2v.
Matt
Attached is the virt-v2v generated OVF that's in my NFS export domain.

Any other means to get KVM/libvirt/virt-manager based VMs into oVirt? Possibly something as crude as provisioning new VMs with oVirt and then replacing the virtual hard drives?
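For anyone wanting to diagnose it, a quick first sanity check is whether the OVF is even well-formed XML (the path below is only illustrative of the export domain layout, not copied from my setup):

# xmllint --noout /mnt/exportdomain/<sd-uuid>/master/vms/<vm-uuid>/<vm-uuid>.ovf

Thanks
- Trey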

On 07/20/2012 02:08 AM, Trey Dockendorf wrote:
Attached is the virt-v2v generated ovf that's in my NFS export domain
Any other means to get KVM/libvirt/virt-manager based VMs into oVirt? Possibly something as crude as provisioning new VMs with oVirt then replacing the virtual hard drives?
This would work - just create the VM on an NFS storage domain with a disk the same size as the original, and copy over the disk you had. It's a bit trickier for iSCSI, so I'd do this with NFS.
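As a rough sketch (the mount point and all UUIDs below are illustrative, not from this thread): on an NFS data domain each disk is a file under the domain's images directory, so you could convert the source qcow2 straight onto the volume file the engine created for the new, same-size disk - qemu-img writes the raw output sparsely by default:

# qemu-img convert -f qcow2 -O raw dh-imager01_sys.qcow2 /mnt/nfs-domain/<sd-uuid>/images/<image-uuid>/<volume-uuid>

Matching the image/volume UUID placeholders to the disk the engine actually created is left to you.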

On Fri, Jul 20, 2012 at 3:52 AM, Itamar Heim <iheim@redhat.com> wrote:
Any other means to get KVM/libvirt/virt-manager based VMs into oVirt? Possibly something as crude as provisioning new VMs with oVirt then replacing the virtual hard drives?
this would work - just create the VM on an NFS storage domain with a disk the same size as origin, and copy over the disk you had. a bit trickier for iscsi, so i'd do this with nfs.
Why is it trickier with iSCSI? Currently the only Data Center I have functioning in oVirt has only iSCSI storage available.

Thanks
- Trey

On 07/20/2012 07:21 PM, Trey Dockendorf wrote:
Why is it trickier with iSCSI? Currently the only Data Center I have functioning in oVirt only has iSCSI storage available.
With iSCSI you will have to create the disks as preallocated and use dd to overwrite them; NFS doesn't have to be preallocated. And since you are using preallocated disks, you need to use the raw format, IIRC.
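For illustration, something along these lines (the device path is hypothetical - on a block domain each disk is an LV named after its volume UUID, inside a VG named after the storage domain UUID, and the LV has to be activated first):

# qemu-img convert -f qcow2 -O raw dh-imager01_sys.qcow2 dh-imager01_sys.raw
# lvchange -ay /dev/<storage-domain-uuid>/<volume-uuid>
# dd if=dh-imager01_sys.raw of=/dev/<storage-domain-uuid>/<volume-uuid> bs=1M conv=fsync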

On Fri, Jul 20, 2012 at 11:32 AM, Itamar Heim <iheim@redhat.com> wrote:
Why is it trickier with iSCSI? Currently the only Data Center I have functioning in oVirt only has iSCSI storage available.
with iscsi, you will have to create the disks as pre-allocated, and use DD to overwrite them. NFS doesn't have to be pre-allocated. and since you are using pre-allocated, you need to use the RAW format iirc
Currently most of my KVM VMs are qcow2, so converting them to raw would not be a problem. However, why is dd necessary? Why can't I overwrite the <image_name>.img with my *.img file? Since I've used mostly qcow2 in my time with KVM/libvirt, I may lack some understanding of how to correctly handle raw images.

Would a qcow2 image with preallocation=metadata be possible on an iSCSI data store?

Thanks
- Trey

On 07/20/2012 09:19 PM, Trey Dockendorf wrote:
Currently most of my KVM VMs are qcow2, so converting them to raw would not be a problem. However, why is DD necessary? Why can't I overwrite the <image_name>.img with my *.img file ? Since I've used mostly qcow2 in my time with KVM/libvirt I may lack some understanding of how to correctly handle raw images.
In both cases there aren't any .img files. You can convert your qcow2 images to raw before copying them over to iSCSI or NFS using qemu-img convert. It is not strictly necessary, but it will save you from failing on small details between the two. Using the export domain is safest, even though it doubles the amount of I/O.
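Before creating the target disk it's also worth checking the source's virtual size so the new disk matches, e.g. (output abridged, sizes illustrative):

# qemu-img info dh-imager01_sys.qcow2
image: dh-imager01_sys.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 8.1G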
Would a qcow2 image with preallocation=metadata be possible on an iSCSI data store?
ayal?

----- Original Message -----
Would a qcow2 image with preallocation=metadata be possible on an iSCSI data store?
ayal?
Nope. Metadata preallocation means that each logical block has a corresponding physical block. With files this is fine, as you can seek wherever you want and the file will remain sparse. With block devices this makes little sense: the second the guest accesses a block which is mapped to an unallocated physical block, we'd have to allocate all the area up to that point. (BTW, qemu-img will fail if you try to create such an image on a block device.)
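A quick way to see the difference (paths hypothetical, behavior as described above) - on a file this succeeds and the file stays sparse despite its ~10G apparent size, while the same command against a block device is expected to fail:

# qemu-img create -f qcow2 -o preallocation=metadata disk.qcow2 10G
# ls -lhs disk.qcow2
# qemu-img create -f qcow2 -o preallocation=metadata /dev/<vg>/<lv> 10G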

On 2012-7-22 19:51, Ayal Baron wrote:
nope. metadata preallocation means that each logical block has a corresponding physical block.
Ayal, when you say "logical block" and "physical block" here, what do they correspond to on a Linux system? My guess: the physical block is the SCSI LUN disk, and the logical block is the LVM disk. Right?
--
Shu Ming <shuming@linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory

<SNIP>
Would a qcow2 image with preallocation=metadata be possible on an iSCSI data store?
ayal?
nope. metadata preallocation means that each logical block has a corresponding physical block.
Ayal, when you say "logical block" and "physical block" here, what do they correspond to on a Linux system? My guess: the physical block is the SCSI LUN disk, and the logical block is the LVM disk. Right?
No. The guest writes to block X, and qcow maps X to Y on the underlying device (e.g. an LV). X is "logical" in the example above; Y is "physical".

*Warning*, the following explanation is a bit convoluted ;)

Metadata preallocation means that all qcow clusters are already preset, with every X mapped to a Y. Now, on block storage, if the guest writes to an X which is mapped to a Y beyond the device size (because it's thinly provisioned), we would need to extend the device to at least Y, if not beyond. The worst case is a guest I/O to a block which is mapped to offset = size of the virtual disk, which would force us to preallocate the entire disk at that point for a single block (e.g. one small write near the end of a 100 GiB virtual disk would force allocating nearly all 100 GiB at once).

Wanted to follow up on this previous issue and report that after upgrading to the stable release of 3.1, the import works. What's strange is that the latest attempt used the latest virt-v2v in EL6, virt-v2v-0.8.7-6, and when I went into the storage domain to view the imports, it showed one of my past imports that had previously failed in the list. However, it is fixed - it now works!

For all those wondering what steps I took to import a KVM VM into oVirt:

# virsh dumpxml dh-imager01 > dh-imager01.xml

(No editing of the XML required now.)

# virt-v2v -b ovirtmgmt -i libvirtxml -o rhev -os dc-engine.tamu.edu:/exportdomain dh-imager01.xml

* From the Engine web interface, go to the Export Domain's entry under the Storage tab
* Select the VM Import tab
* Restore the imported VM

Thanks
- Trey

Wasn't there an issue with dates in the OVF that caused this a few weeks ago?

----- Original Message -----
From: "Itamar Heim" <iheim@redhat.com> To: "Trey Dockendorf" <treydock@gmail.com>, "Matthew Booth" <mbooth@redhat.com> Cc: "users" <users@ovirt.org> Sent: Wednesday, July 18, 2012 6:52:10 PM Subject: Re: [Users] Migrating from KVM to oVirt 3.1 fails - corrupt OVF

On 19.07.12 07:57, Andrew Cathrow wrote:
Wasn't there an issue with dates in the OVF that caused this a few weeks ago?

No, it was related to the export-date and creation-date, and it was not an exception. We did have an exception regarding a storage domain, but that was regarding the snapshot's storage id.
participants (7)
- Andrew Cathrow
- Ayal Baron
- Itamar Heim
- Matthew Booth
- Shahar Havivi
- Shu Ming
- Trey Dockendorf