[ovirt-users] problem importing ova vm

Arik Hadas ahadas at redhat.com
Wed Feb 21 16:35:05 UTC 2018


On Wed, Feb 21, 2018 at 6:03 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote:

> On 02/21/2018 03:43 PM, Jiří Sléžka wrote:
> > On 02/20/2018 11:09 PM, Arik Hadas wrote:
> >>
> >>
> >> On Tue, Feb 20, 2018 at 6:37 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote:
> >>
> >>     On 02/20/2018 03:48 PM, Arik Hadas wrote:
> >>     >
> >>     >
> >>     > On Tue, Feb 20, 2018 at 3:49 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote:
> >>     >
> >>     >     Hi Arik,
> >>     >
> >>     >     On 02/20/2018 01:22 PM, Arik Hadas wrote:
> >>     >     >
> >>     >     >
> >>     >     > On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote:
> >>     >     >
> >>     >     >     Hi,
> >>     >     >
> >>     >     >
> >>     >     > Hi Jiří,
> >>     >     >
> >>     >     >
> >>     >     >
> >>     >     >     I would like to try to import some OVA files into our
> >>     >     >     oVirt instance [1][2] but I am facing problems.
> >>     >     >
> >>     >     >     I have downloaded all the OVA images onto one of the
> >>     >     >     hosts (ovirt01), into the directory /ova
> >>     >     >
> >>     >     >     ll /ova/
> >>     >     >     total 6532872
> >>     >     >     -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf
> >>     >     >     -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  846736896 Feb 16 16:22 HAAS-hpjdwpd.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  891043328 Feb 16 16:23 HAAS-hptelnetd.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  908222464 Feb 16 16:23 HAAS-hpuchotcp.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  880643072 Feb 16 16:24 HAAS-hpuchoudp.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  890833920 Feb 16 16:24 HAAS-hpuchoweb.ova
> >>     >     >
> >>     >     >     Then I tried to import them - from host ovirt01 and
> >>     >     >     directory /ova - but the spinner spins infinitely and
> >>     >     >     nothing happens.
> >>     >     >
> >>     >     >
> >>     >     > And does it work when you provide a path to the actual OVA
> >>     >     > file, i.e., /ova/HAAS-hpdio.ova, rather than to the directory?
> >>     >
> >>     >     this time it ends with a "Failed to load VM configuration from
> >>     >     OVA file: /ova/HAAS-hpdio.ova" error.
> >>     >
> >>     >
> >>     > Note that the logic that is applied to a specified folder is "try
> >>     > fetching an 'ova folder' out of the destination folder" rather than
> >>     > "list all the ova files inside the specified folder". It seems that
> >>     > you expected the former output since there are no disks in that
> >>     > folder, right?
> >>
> >>     yes, it would be more user-friendly to list all the OVA files and
> >>     then select which one to import (like listing all VMs in the VMware
> >>     import)
> >>
> >>     Maybe the description of the path field in the manager should be
> >>     "Path to OVA file" instead of "Path" :-)
> >>
> >>
> >> Sorry, I obviously meant 'latter' rather than 'former' before.
> >> Yeah, I agree that would be better, at least until listing the OVA files
> >> in the folder is implemented (that was the original plan, btw) - could
> >> you please file a bug?
> >
> > yes, sure
> >
> >
> >>     >     >     I cannot see anything relevant in the vdsm log of host
> >>     >     >     ovirt01.
> >>     >     >
> >>     >     >     In the engine.log of our standalone oVirt manager there
> >>     >     >     is just this relevant line:
> >>     >     >
> >>     >     >     2018-02-20 12:35:04,289+01 INFO
> >>     >     >     [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default
> >>     >     >     task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible
> >>     >     >     command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin
> >>     >     >     [/usr/bin/ansible-playbook,
> >>     >     >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa,
> >>     >     >     --inventory=/tmp/ansible-inventory8237874608161160784,
> >>     >     >     --extra-vars=ovirt_query_ova_path=/ova,
> >>     >     >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile:
> >>     >     >     /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log]
> >>     >     >
> >>     >     >     also, there are two ansible processes which are still
> >>     >     >     running (and they make a heavy load on the system: load 9+
> >>     >     >     and growing; it looks like they eat all the memory and the
> >>     >     >     system starts swapping)
> >>     >     >
> >>     >     >     ovirt    32087  3.3  0.0 332252  5980 ?     Sl   12:35   0:41
> >>     >     >     /usr/bin/python2 /usr/bin/ansible-playbook
> >>     >     >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
> >>     >     >     --inventory=/tmp/ansible-inventory8237874608161160784
> >>     >     >     --extra-vars=ovirt_query_ova_path=/ova
> >>     >     >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
> >>     >     >     ovirt    32099 57.5 78.9 15972880 11215312 ? R    12:35  11:52
> >>     >     >     /usr/bin/python2 /usr/bin/ansible-playbook
> >>     >     >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
> >>     >     >     --inventory=/tmp/ansible-inventory8237874608161160784
> >>     >     >     --extra-vars=ovirt_query_ova_path=/ova
> >>     >     >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
> >>     >     >
> >>     >     >     the playbook looks like
> >>     >     >
> >>     >     >     - hosts: all
> >>     >     >       remote_user: root
> >>     >     >       gather_facts: no
> >>     >     >
> >>     >     >       roles:
> >>     >     >         - ovirt-ova-query
> >>     >     >
> >>     >     >     and it looks like it only runs query_ova.py, but on all
> >>     >     >     hosts?
> >>     >     >
> >>     >     >
> >>     >     > No, the engine provides ansible with the host to run on when
> >>     >     > it executes the playbook.
> >>     >     > It would only be executed on the selected host.
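> >>     >     > For illustration (hypothetical contents): the generated
> >>     >     > inventory file passed via --inventory above contains only the
> >>     >     > selected host, so 'hosts: all' matches just that single host,
> >>     >     > e.g.:
> >>     >     >
> >>     >     >     ovirt01.net.slu.cz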
> >>     >     >
> >>     >     >
> >>     >     >
> >>     >     >     How does this work? ...or should it work?
> >>     >     >
> >>     >     >
> >>     >     > It should; in particular, the part that queries the OVA is
> >>     >     > supposed to be really quick.
> >>     >     > Can you please share the engine log and
> >>     >     > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ?
> >>     >
> >>     >     engine log is here:
> >>     >
> >>     >     https://pastebin.com/nWWM3UUq
> >>     >
> >>     >
> >>     > Thanks.
> >>     > Alright, so now the configuration is fetched but its processing fails.
> >>     > We fixed many issues in this area recently, but it appears that
> >>     > something is wrong with the actual size of the disk within the OVF
> >>     > file that resides inside this OVA file.
> >>     > Can you please share the OVF file that resides inside /ova/HAAS-hpdio.ova?
> >>
> >>     file HAAS-hpdio.ova
> >>     HAAS-hpdio.ova: POSIX tar archive (GNU)
> >>
> >>     [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova
> >>     HAAS-hpdio.ovf
> >>     HAAS-hpdio-disk001.vmdk
> >>
> >>     file HAAS-hpdio.ovf is here:
> >>
> >>     https://pastebin.com/80qAU0wB
> >>
> >>
> >> Thanks again.
> >> So that seems to be a VM that was exported from VirtualBox, right?
> >> They don't do anything that violates the OVF specification, but they do
> >> some uncommon things that we don't anticipate:
> >
> > yes, it is most likely an OVA from VirtualBox
> >
> >> First, they don't specify the actual size of the disk, and the current
> >> code in oVirt relies on that property.
> >> There is a workaround for this though: you can extract the OVA file, edit
> >> its OVF configuration - adding ovf:populatedSize="X" (and changing
> >> ovf:capacity as I'll describe next) to the Disk element inside the
> >> DiskSection - and pack the OVA again (tar cvf <ova_file> <ovf_file>
> >> <disk_file>), where X is either:
> >> 1. the actual size of the vmdk file + some buffer (IIRC, we used to take
> >> 15% of extra space for the conversion)
> >> 2. if you're using file storage, or you don't mind consuming more
> >> storage space on your block storage, simply set X to the virtual size of
> >> the disk (in bytes) as indicated by the ovf:capacity field, e.g.,
> >> ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova.
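> >>
> >> For illustration, the unpack/repack workflow would roughly be (a sketch;
> >> file names taken from your listing, the output name is arbitrary, and the
> >> OVF should stay the first member of the archive, as in the original):
> >>
> >>     tar xvf HAAS-hpdio.ova
> >>     (edit HAAS-hpdio.ovf as described)
> >>     tar cvf HAAS-hpdio_new.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk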
> >>
> >> Second, the virtual size (indicated by ovf:capacity) is specified in
> >> bytes. The specification says that the default unit of allocation shall
> >> be bytes, but practically every OVA file that I've ever seen specified it
> >> in GB, and the current code in oVirt kind of assumes that this is the
> >> case without checking the ovf:capacityAllocationUnits attribute that
> >> could indicate the real unit of allocation [1].
> >> Anyway, long story short, the virtual size of the disk should currently
> >> be specified in GB, e.g., ovf:populatedSize="20" in the case of
> >> HAAS-hpdio.ova.
> >
> > wow, thanks for this excellent explanation. I have changed this in the
> > OVF file
> >
> > ...
> > <Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="20" ...
> > ...
> >
> > then I was able to import this modified OVA file (HAAS-hpdio_new.ova).
> > The interesting thing is that the VM was shown in the VM list for a while
> > (in state down with a lock, and the status was "initializing"). After a
> > while this VM disappeared :-o
> >
> > I am going to test it again and collect some logs...
>
> there are interesting logs in /var/log/vdsm/import/ on the host used for
> the import
>
> http://mirror.slu.cz/tmp/ovirt-import.tar.bz2
>
> the first of them describes the situation where I chose thick
> provisioning, the second the situation with thin provisioning
>
> the interesting part is, I believe:
>
> libguestfs: command: run: qemu-img
> libguestfs: command: run: \ create
> libguestfs: command: run: \ -f qcow2
> libguestfs: command: run: \ -o preallocation=off,compat=0.10
> libguestfs: command: run: \ /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec
> libguestfs: command: run: \ 21474836480
> Formatting '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec',
> fmt=qcow2 size=21474836480 compat=0.10 encryption=off cluster_size=65536
> preallocation=off lazy_refcounts=off refcount_bits=16
> libguestfs: trace: vdsm_disk_create: disk_create = 0
> qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2'
> '/var/tmp/v2vovl2dccbd.qcow2'
> '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec'
> qemu-img: error while writing sector 1000960: No space left on device
>
> virt-v2v: error: qemu-img command failed, see earlier errors
>
>
>
Sorry again, I made a mistake in:
 "Anyway, long story short, the virtual size of the disk should currently
 be specified in GB, e.g., ovf:populatedSize="20" in the case of
 HAAS-hpdio.ova."
I should have written ovf:capacity="20".
That also likely explains the "No space left on device" error above: with
ovf:populatedSize set to "20", the target volume gets allocated far too
small for the conversion.
So if you wish the actual size of the disk to be 20GB (which means the disk
is preallocated), the Disk element should be set with:
<Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="21474836480"
...
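For reference, both values for the Disk element can be read off the
extracted disk with qemu-img info, which reports the virtual size in bytes
(a sketch; output abbreviated and illustrative):

  qemu-img info HAAS-hpdio-disk001.vmdk
    image: HAAS-hpdio-disk001.vmdk
    file format: vmdk
    virtual size: 20G (21474836480 bytes)
    disk size: 1.0G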


>
> >
> >> That should do it. If not, please share the OVA file and I will examine
> >> it in my environment.
> >
> > original file is at
> >
> > https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova
> >
> >>
> >> [1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220
> >>
> >>
> >>
> >>     >     the file
> >>     >     /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log
> >>     >     in fact does not exist (nor does the folder /var/log/ovirt-engine/ova/)
> >>     >
> >>     >
> >>     > This issue is also resolved in 4.2.2.
> >>     > In the meantime, please create the /var/log/ovirt-engine/ova/ folder
> >>     > manually and make sure its permissions match the ones of the other
> >>     > folders in /var/log/ovirt-engine.
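> >>     > For illustration, something like this (a sketch; adjust the owner
> >>     > and mode to match your existing /var/log/ovirt-engine folders):
> >>     >
> >>     >     mkdir /var/log/ovirt-engine/ova
> >>     >     chown ovirt:ovirt /var/log/ovirt-engine/ova
> >>     >     chmod 755 /var/log/ovirt-engine/ova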
> >>
> >>     ok, done. After another try there is this log file
> >>
> >>     /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log
> >>
> >>     https://pastebin.com/M5J44qur
> >>
> >>
> >> Is it the log of the execution of the ansible playbook that was provided
> >> with a path to the /ova folder?
> >> I'm interested in that in order to see why its execution never completed.
> >
> > well, I don't think so, it is the log from the import with the full
> > path to the OVA file
> >
> >
> >
> >>
> >>
> >>
> >>
> >>     >     Cheers,
> >>     >
> >>     >     Jiri Slezka
> >>     >
> >>     >     >
> >>     >     >
> >>     >     >
> >>     >     >     I am using the latest version, 4.2.1.7-1.el7.centos
> >>     >     >
> >>     >     >     Cheers,
> >>     >     >     Jiri Slezka
> >>     >     >
> >>     >     >
> >>     >     >     [1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS
> >>     >     >     [2] https://haas.cesnet.cz/downloads/release-01/ - Image repository