

On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
Hi,
Hi Jiří,
I would like to try to import some ova files into our oVirt instance [1] [2], but I am facing problems.
I have downloaded all the ova images onto one of the hosts (ovirt01), into the directory /ova
ll /ova/
total 6532872
-rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf
-rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova
-rw-r--r--. 1 vdsm kvm  846736896 Feb 16 16:22 HAAS-hpjdwpd.ova
-rw-r--r--. 1 vdsm kvm  891043328 Feb 16 16:23 HAAS-hptelnetd.ova
-rw-r--r--. 1 vdsm kvm  908222464 Feb 16 16:23 HAAS-hpuchotcp.ova
-rw-r--r--. 1 vdsm kvm  880643072 Feb 16 16:24 HAAS-hpuchoudp.ova
-rw-r--r--. 1 vdsm kvm  890833920 Feb 16 16:24 HAAS-hpuchoweb.ova
Then I tried to import them, from host ovirt01 and directory /ova, but the spinner spins infinitely and nothing happens.
And does it work when you provide a path to the actual ova file, i.e., /ova/HAAS-hpdio.ova, rather than to the directory?
I cannot see anything relevant in the vdsm log of host ovirt01.
In the engine.log of our standalone oVirt manager there is just this relevant line:
2018-02-20 12:35:04,289+01 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin [/usr/bin/ansible-playbook, --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, --inventory=/tmp/ansible-inventory8237874608161160784, --extra-vars=ovirt_query_ova_path=/ova, /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log]
There are also two ansible processes which are still running and put a heavy load on the system (load 9+ and growing; it looks like they eat all the memory and the system starts swapping):
ovirt    32087  3.3  0.0 332252    5980 ?    Sl   12:35  0:41 /usr/bin/python2 /usr/bin/ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8237874608161160784 --extra-vars=ovirt_query_ova_path=/ova /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
ovirt    32099 57.5 78.9 15972880 11215312 ? R    12:35 11:52 /usr/bin/python2 /usr/bin/ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8237874608161160784 --extra-vars=ovirt_query_ova_path=/ova /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
The playbook looks like:
- hosts: all
  remote_user: root
  gather_facts: no

  roles:
    - ovirt-ova-query
and it looks like it only runs query_ova.py, but on all hosts?
No, the engine provides ansible with the host to run on when it executes the playbook, so it is only executed on the selected host.
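The mechanism described above can be sketched as follows. This is illustrative only: the inventory file name and its contents are assumptions (the thread only shows the `--inventory=/tmp/ansible-inventory8237874608161160784` argument, not the file itself). Because the generated inventory contains only the selected host, `hosts: all` resolves to that single entry:

```
# hypothetical one-host inventory written by the engine
$ cat /tmp/ansible-inventory-example
ovirt01.net.slu.cz

# "hosts: all" in ovirt-ova-query.yml expands to the inventory contents,
# so this run targets only ovirt01
$ ansible-playbook --inventory=/tmp/ansible-inventory-example \
      /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
```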
How does this work? ...or should it work?
It should, and that part of querying the OVA is supposed to be really quick. Can you please share the engine log and /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ?
I am using the latest version, 4.2.1.7-1.el7.centos.
Cheers,
Jiri Slezka
[1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS
[2] https://haas.cesnet.cz/downloads/release-01/ - Image repository
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


On Tue, Feb 20, 2018 at 3:49 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
Hi Arik,
On 02/20/2018 01:22 PM, Arik Hadas wrote:
On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
Hi,
Hi Jiří,
I would like to try to import some ova files into our oVirt instance [1] [2], but I am facing problems.
I have downloaded all the ova images onto one of the hosts (ovirt01), into the directory /ova
ll /ova/
total 6532872
-rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf
-rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova
-rw-r--r--. 1 vdsm kvm  846736896 Feb 16 16:22 HAAS-hpjdwpd.ova
-rw-r--r--. 1 vdsm kvm  891043328 Feb 16 16:23 HAAS-hptelnetd.ova
-rw-r--r--. 1 vdsm kvm  908222464 Feb 16 16:23 HAAS-hpuchotcp.ova
-rw-r--r--. 1 vdsm kvm  880643072 Feb 16 16:24 HAAS-hpuchoudp.ova
-rw-r--r--. 1 vdsm kvm  890833920 Feb 16 16:24 HAAS-hpuchoweb.ova
Then I tried to import them, from host ovirt01 and directory /ova, but the spinner spins infinitely and nothing happens.
And does it work when you provide a path to the actual ova file, i.e., /ova/HAAS-hpdio.ova, rather than to the directory?
this time it ends with "Failed to load VM configuration from OVA file: /ova/HAAS-hpdio.ova" error.
Note that the logic that is applied to a specified folder is "try fetching an 'ova folder' out of the destination folder" rather than "list all the ova files inside the specified folder". It seems that you expected the latter behavior since there are no disks in that folder, right?
I cannot see anything relevant in the vdsm log of host ovirt01.
In the engine.log of our standalone oVirt manager there is just this relevant line:
2018-02-20 12:35:04,289+01 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin [/usr/bin/ansible-playbook, --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, --inventory=/tmp/ansible-inventory8237874608161160784, --extra-vars=ovirt_query_ova_path=/ova, /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log]
There are also two ansible processes which are still running and put a heavy load on the system (load 9+ and growing; it looks like they eat all the memory and the system starts swapping):
ovirt    32087  3.3  0.0 332252    5980 ?    Sl   12:35  0:41 /usr/bin/python2 /usr/bin/ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8237874608161160784 --extra-vars=ovirt_query_ova_path=/ova /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
ovirt    32099 57.5 78.9 15972880 11215312 ? R    12:35 11:52 /usr/bin/python2 /usr/bin/ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8237874608161160784 --extra-vars=ovirt_query_ova_path=/ova /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
The playbook looks like:
- hosts: all
  remote_user: root
  gather_facts: no

  roles:
    - ovirt-ova-query
and it looks like it only runs query_ova.py, but on all hosts?
No, the engine provides ansible with the host to run on when it executes the playbook, so it is only executed on the selected host.
How does this work? ...or should it work?
It should, and that part of querying the OVA is supposed to be really quick. Can you please share the engine log and /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ?
engine log is here: https://pastebin.com/nWWM3UUq
Thanks. Alright, so now the configuration is fetched but its processing fails. We fixed many issues in this area recently, but it appears that something is wrong with the actual size of the disk within the ovf file that resides inside this ova file. Can you please share that ovf file that resides inside /ova/HAAS-hpdio.ova?
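Since an OVA is just a tar archive (as the `file` output later in the thread confirms), the OVF descriptor can be pulled out without unpacking the multi-GB disk image. A toy demo under assumed names: `demo.ovf`/`demo.ova` and the `capacity` attribute value are made up here; on the real host the member would be `HAAS-hpdio.ovf`:

```shell
# Build a stand-in OVA so the commands can be run anywhere (an OVA is a tar
# archive whose .ovf member is the XML descriptor carrying the disk-size attributes).
printf '<Envelope><Disk capacity="20"/></Envelope>' > demo.ovf
tar cf demo.ova demo.ovf

tar tf demo.ova                    # list members to find the descriptor
tar xf demo.ova demo.ovf           # extract only the .ovf, not the disk image
grep -o 'capacity="[^"]*"' demo.ovf
```

The same pattern with the real file would be `tar xf HAAS-hpdio.ova HAAS-hpdio.ovf`.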
The file /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log in fact does not exist (nor does the folder /var/log/ovirt-engine/ova/).
This issue is also resolved in 4.2.2. In the meantime, please create the /var/log/ovirt-engine/ova/ folder manually and make sure its permissions match the ones of the other folders in /var/log/ovirt-engine.
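The suggested workaround could look like the sketch below. The mode and owner are assumptions (copy them from the sibling folders in /var/log/ovirt-engine), and `OVA_LOG` defaults to a scratch path so the commands can be dry-run unprivileged; on the engine host set `OVA_LOG=/var/log/ovirt-engine/ova` and run as root:

```shell
# Dry-runnable sketch: OVA_LOG defaults to a scratch directory; on the real
# engine host use OVA_LOG=/var/log/ovirt-engine/ova (as root).
OVA_LOG="${OVA_LOG:-./ovirt-engine-ova-demo}"
mkdir -p "$OVA_LOG"
chmod 750 "$OVA_LOG"            # assumed mode -- mirror the other /var/log/ovirt-engine folders
# chown ovirt:ovirt "$OVA_LOG"  # needed on the real host; requires root, so commented out here
```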
Cheers,
Jiri Slezka
I am using the latest version, 4.2.1.7-1.el7.centos.
Cheers,
Jiri Slezka
[1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS
[2] https://haas.cesnet.cz/downloads/release-01/ - Image repository
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 02/20/2018 03:48 PM, Arik Hadas wrote:

Note that the logic that is applied to a specified folder is "try fetching an 'ova folder' out of the destination folder" rather than "list all the ova files inside the specified folder". It seems that you expected the latter behavior since there are no disks in that folder, right?

yes, it would be more user friendly to list all ova files and then select which one to import (like listing all VMs in the VMware import). Maybe the description of the path field in the manager should be "Path to ova file" instead of "Path" :-)

Thanks. Alright, so now the configuration is fetched but its processing fails. We fixed many issues in this area recently, but it appears that something is wrong with the actual size of the disk within the ovf file that resides inside this ova file. Can you please share that ovf file that resides inside /ova/HAAS-hpdio.ova?

file HAAS-hpdio.ova
HAAS-hpdio.ova: POSIX tar archive (GNU)

[root@ovirt01 backup]# tar xvf HAAS-hpdio.ova
HAAS-hpdio.ovf
HAAS-hpdio-disk001.vmdk

file HAAS-hpdio.ovf is here: https://pastebin.com/80qAU0wB

This issue is also resolved in 4.2.2. In the meantime, please create the /var/log/ovirt-engine/ova/ folder manually and make sure its permissions match the ones of the other folders in /var/log/ovirt-engine.

ok, done. After another try there is this log file:

/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log

https://pastebin.com/M5J44qur

Cheers,
Jiri Slezka

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--------------ms000605090504020609060003 Content-Type: application/pkcs7-signature; name="smime.p7s" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="smime.p7s" Content-Description: S/MIME Cryptographic Signature MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwEAAKCC Cn8wggUJMIID8aADAgECAhACt8ndrdK9CetZxFyQDGB4MA0GCSqGSIb3DQEBCwUAMGUxCzAJ BgNVBAYTAlVTMRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2Vy dC5jb20xJDAiBgNVBAMTG0RpZ2lDZXJ0IEFzc3VyZWQgSUQgUm9vdCBDQTAeFw0xNDExMTgx MjAwMDBaFw0yNDExMTgxMjAwMDBaMHIxCzAJBgNVBAYTAk5MMRYwFAYDVQQIEw1Ob29yZC1I b2xsYW5kMRIwEAYDVQQHEwlBbXN0ZXJkYW0xDzANBgNVBAoTBlRFUkVOQTEmMCQGA1UEAxMd VEVSRU5BIGVTY2llbmNlIFBlcnNvbmFsIENBIDMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw ggEKAoIBAQCwp9Jj5Aej1xPkS1GV3LvBdemFmkUR//nSzBodqsU3dv2BCRD30r4gt5oRsYty qDGF2nnItxV1SkwVoDxFeRzOIHYNYvBRHaiGvCQjEXzPRTocOSVfWpmq/zAL/QOEqpJogeM+ 0IBGiJcAENJshl7UcfjYbBnN5qStk74f52VWFf/aiF7MVJnsUr3oriQvXYOzs8N/NXyyQyim atBbumJVCNszF1X+XHCGfPNvxlNFW9ktv7azK0baminfLcsh6ubCdINZc+Nof2lU387NCDgg oh3KsYVcZTSuhh7qp6MjxE5VqOZod1hpXXzDOkjK+DAMC57iZXssncp24eaN08VlAgMBAAGj ggGmMIIBojASBgNVHRMBAf8ECDAGAQH/AgEAMA4GA1UdDwEB/wQEAwIBhjB5BggrBgEFBQcB AQRtMGswJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmRpZ2ljZXJ0LmNvbTBDBggrBgEFBQcw AoY3aHR0cDovL2NhY2VydHMuZGlnaWNlcnQuY29tL0RpZ2lDZXJ0QXNzdXJlZElEUm9vdENB LmNydDCBgQYDVR0fBHoweDA6oDigNoY0aHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0RpZ2lD ZXJ0QXNzdXJlZElEUm9vdENBLmNybDA6oDigNoY0aHR0cDovL2NybDQuZGlnaWNlcnQuY29t L0RpZ2lDZXJ0QXNzdXJlZElEUm9vdENBLmNybDA9BgNVHSAENjA0MDIGBFUdIAAwKjAoBggr BgEFBQcCARYcaHR0cHM6Ly93d3cuZGlnaWNlcnQuY29tL0NQUzAdBgNVHQ4EFgQUjJ8RLubj egSlHlWLRggEpu2XcKYwHwYDVR0jBBgwFoAUReuir/SSy4IxLVGLp6chnfNtyA8wDQYJKoZI hvcNAQELBQADggEBAI5HEV91Oen8WHFCoJkeu2Av+b/kWTV2qH/YNI1Xsbou2hHKhh4IyNkF OxA/TUiuK2qQnQ5hAS0TIrs9SJ1Ke+DjXd/cTBiw7lCYSW5hkzigFV+iSivninpItafWqYBS WxITl1KHBS9YBskhEqO5GLliDMPiAgjqUBQ/H1qZMlZNQIuFu0UaFUQuZUpJFr4+0zpzPxsB iWU2muAoGItwbaP55EYshM7+v/J+x6kIhAJt5Dng8fOmOvR9F6Vw2/E0EZ6oQ8g1fdhwM101 
S1OI6J1tUil1r7ES/svNqVWVb7YkUEBcPo8ppfHnTI/uxsn2tslsWefsOGJxNYUUSMAb9Eow ggVuMIIEVqADAgECAhAKebGg8bOvnIyfOWAn4bpzMA0GCSqGSIb3DQEBCwUAMHIxCzAJBgNV BAYTAk5MMRYwFAYDVQQIEw1Ob29yZC1Ib2xsYW5kMRIwEAYDVQQHEwlBbXN0ZXJkYW0xDzAN BgNVBAoTBlRFUkVOQTEmMCQGA1UEAxMdVEVSRU5BIGVTY2llbmNlIFBlcnNvbmFsIENBIDMw HhcNMTcxMTE2MDAwMDAwWhcNMTgxMjE1MTIwMDAwWjCBlDETMBEGCgmSJomT8ixkARkWA29y ZzEWMBQGCgmSJomT8ixkARkWBnRlcmVuYTETMBEGCgmSJomT8ixkARkWA3RjczELMAkGA1UE BhMCQ1oxJTAjBgNVBAoTHFNpbGVzaWFuIFVuaXZlcnNpdHkgaW4gT3BhdmExHDAaBgNVBAMT E0ppcmkgU2xlemthIHNsZTAwMDEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC/ VwOD1hlYL6l7GzxNqV1ne7/iMF/gHvPfTwejsC2s9sby7It82qXPRBVA2s1Cjb1A3ucpdlDN MXM83Lvh881XfkxhS2YLLyiZDmlSzAqfoMLxQ2/E0m1UugttzGJF7/10pEwj0FJFhnIVwA/E 8svCcbhxwO9BBpUz8JG1C6fTd0qyzJtNXVyH+WuHQbU2jgu2JJ7miiEKE1Fis0hFf1rKxTzX aVGyXiQLOn7TZDfPtXrJEG7eWYlFUP58edyuJELpWHTPHn8xJKYTy8Qq5BgFNyCRQT/6imsh tZlDBZSEeqyoSNtLsC57ZrjqgtLCEQFK9EX27dOy0/u95zS0OIWdAgMBAAGjggHbMIIB1zAf BgNVHSMEGDAWgBSMnxEu5uN6BKUeVYtGCASm7ZdwpjAdBgNVHQ4EFgQUF1mSlcyDz9wWit9V jCz+zJ9CrpswDAYDVR0TAQH/BAIwADAdBgNVHREEFjAUgRJqaXJpLnNsZXprYUBzbHUuY3ow DgYDVR0PAQH/BAQDAgSwMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDBDA0BgNVHSAE LTArMAwGCiqGSIb3TAUCAgEwDAYKYIZIAYb9bAQfATANBgsqhkiG90wFAgMDAzCBhQYDVR0f BH4wfDA8oDqgOIY2aHR0cDovL2NybDMuZGlnaWNlcnQuY29tL1RFUkVOQWVTY2llbmNlUGVy c29uYWxDQTMuY3JsMDygOqA4hjZodHRwOi8vY3JsNC5kaWdpY2VydC5jb20vVEVSRU5BZVNj aWVuY2VQZXJzb25hbENBMy5jcmwwewYIKwYBBQUHAQEEbzBtMCQGCCsGAQUFBzABhhhodHRw Oi8vb2NzcC5kaWdpY2VydC5jb20wRQYIKwYBBQUHMAKGOWh0dHA6Ly9jYWNlcnRzLmRpZ2lj ZXJ0LmNvbS9URVJFTkFlU2NpZW5jZVBlcnNvbmFsQ0EzLmNydDANBgkqhkiG9w0BAQsFAAOC AQEADtFRxKphkcHVdWjR/+i1+cdHfkbicraHlU5Mpw8EX6nemKu4GGAWfzH+Y7p6ImZwUHWf /SSbrX+57xaFUBOr3jktQm1GRmGUZESEmsUDB8UZXzdQC79/tO9MzRhvEBXuQhdxdoO64Efx VqtYAB2ydqz7yWh56ioSwaQZEXo5rO1kZuAcmVz8Smd1r/Mur/h8Y+qbrsJng1GS25aMhFts UV6z9zXuHFkT9Ck8SLdCEDzjzYNjXIDB5n+QOmPXnXrZMlGiI/aOqa5k5Sv6xCIPdH2kbpyd M1YiH/ChmU9gWJvy0Jq42KGLvWBvuHEzcb3f473Fvn4GWsXu0zDS2oh2/TGCA8MwggO/AgEB 
MIGGMHIxCzAJBgNVBAYTAk5MMRYwFAYDVQQIEw1Ob29yZC1Ib2xsYW5kMRIwEAYDVQQHEwlB bXN0ZXJkYW0xDzANBgNVBAoTBlRFUkVOQTEmMCQGA1UEAxMdVEVSRU5BIGVTY2llbmNlIFBl cnNvbmFsIENBIDMCEAp5saDxs6+cjJ85YCfhunMwDQYJYIZIAWUDBAIBBQCgggINMBgGCSqG SIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTE4MDIyMDE2Mzc0MFowLwYJ KoZIhvcNAQkEMSIEIHHmcF7p8nJEu2XDrcC/2M+pv4d0zJf1U7WottasK1xVMGwGCSqGSIb3 DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAKBggqhkiG9w0DBzAOBggqhkiG 9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYIKoZIhvcNAwICASgwgZcGCSsG AQQBgjcQBDGBiTCBhjByMQswCQYDVQQGEwJOTDEWMBQGA1UECBMNTm9vcmQtSG9sbGFuZDES MBAGA1UEBxMJQW1zdGVyZGFtMQ8wDQYDVQQKEwZURVJFTkExJjAkBgNVBAMTHVRFUkVOQSBl U2NpZW5jZSBQZXJzb25hbCBDQSAzAhAKebGg8bOvnIyfOWAn4bpzMIGZBgsqhkiG9w0BCRAC CzGBiaCBhjByMQswCQYDVQQGEwJOTDEWMBQGA1UECBMNTm9vcmQtSG9sbGFuZDESMBAGA1UE BxMJQW1zdGVyZGFtMQ8wDQYDVQQKEwZURVJFTkExJjAkBgNVBAMTHVRFUkVOQSBlU2NpZW5j ZSBQZXJzb25hbCBDQSAzAhAKebGg8bOvnIyfOWAn4bpzMA0GCSqGSIb3DQEBAQUABIIBAEGL Z+BncfWVrQpCBR7zjJEjEYFAkOUtZ1ZFS9vjWNPGE74TM2PsvQn2vyr6epKgWpQN7Pex4SzW TZrugjZZCPOT8VxKsqr1TXad/r5Wd1mPNjspGfFxGnouIEi/CKWywOFgHH3vfa0WpyKruNsy ifJe5XXU7RNHjkR3damVHuOm1P7VdhjrQFyBW4Rz9G0FclpRztLs4l8JIk5Q0VmIsy7JwWHb 9AJoseZi9vgP+D8jrCjme7TNu8P9hGmLjGcbp19L/vYxD4CYw3sKzXcETDIBpRbzlchd8iSj UzWN3FuU5FqCQ+Z8adcZatfTZdulr2jWAQtTEoyhgttjZ9F7Lk8AAAAAAAA= --------------ms000605090504020609060003--

On Tue, Feb 20, 2018 at 6:37 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
On 02/20/2018 03:48 PM, Arik Hadas wrote:
On Tue, Feb 20, 2018 at 3:49 PM, Jiří Sléžka <jiri.slezka@slu.cz <mailto:jiri.slezka@slu.cz>> wrote:
Hi Arik,
On 02/20/2018 01:22 PM, Arik Hadas wrote:
> On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
>
>     Hi,
>
>     I would like to try to import some OVA files into our oVirt instance
>     [1] [2], but I am facing problems.
>
>     I have downloaded all the OVA images to one of the hosts (ovirt01),
>     into the directory /ova:
>
>     ll /ova/
>     total 6532872
>     -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf
>     -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova
>     -rw-r--r--. 1 vdsm kvm  846736896 Feb 16 16:22 HAAS-hpjdwpd.ova
>     -rw-r--r--. 1 vdsm kvm  891043328 Feb 16 16:23 HAAS-hptelnetd.ova
>     -rw-r--r--. 1 vdsm kvm  908222464 Feb 16 16:23 HAAS-hpuchotcp.ova
>     -rw-r--r--. 1 vdsm kvm  880643072 Feb 16 16:24 HAAS-hpuchoudp.ova
>     -rw-r--r--. 1 vdsm kvm  890833920 Feb 16 16:24 HAAS-hpuchoweb.ova
>
>     Then I tried to import them - from host ovirt01 and directory /ova -
>     but the spinner spins infinitely and nothing happens.
>
> Hi Jiří,
>
> And does it work when you provide a path to the actual OVA file, i.e.,
> /ova/HAAS-hpdio.ova, rather than to the directory?

this time it ends with a "Failed to load VM configuration from OVA file:
/ova/HAAS-hpdio.ova" error.
Note that the logic that is applied on a specified folder is "try fetching an 'ova folder' out of the destination folder" rather than "list all the ova files inside the specified folder". It seems that you expected the former output since there are no disks in that folder,
right?
yes, it would be more user-friendly to list all the OVA files and then
select which one to import (like listing all VMs in the VMware import)

Maybe the description of the path field in the manager should be "Path to
OVA file" instead of "Path" :-)
Sorry, I obviously meant 'latter' rather than 'former' before.. Yeah, I agree that would be better, at least until listing the OVA files in the folder is implemented (that was the original plan, btw) - could you please file a bug?
> I cannot see anything relevant in the vdsm log of host ovirt01.
>
> In the engine.log of our standalone oVirt manager there is just this
> relevant line:
>
> 2018-02-20 12:35:04,289+01 INFO
> [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default
> task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible
> command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin
> [/usr/bin/ansible-playbook,
> --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa,
> --inventory=/tmp/ansible-inventory8237874608161160784,
> --extra-vars=ovirt_query_ova_path=/ova,
> /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile:
> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log]
>
> There are also two ansible processes which are still running (and cause
> heavy load on the system - load 9+ and growing; it looks like they eat
> all the memory and the system starts swapping):
>
> ovirt    32087  3.3  0.0 332252  5980 ?     Sl   12:35   0:41
> /usr/bin/python2 /usr/bin/ansible-playbook
> --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
> --inventory=/tmp/ansible-inventory8237874608161160784
> --extra-vars=ovirt_query_ova_path=/ova
> /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
> ovirt    32099 57.5 78.9 15972880 11215312 ?  R   12:35  11:52
> /usr/bin/python2 /usr/bin/ansible-playbook
> --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
> --inventory=/tmp/ansible-inventory8237874608161160784
> --extra-vars=ovirt_query_ova_path=/ova
> /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
>
> The playbook looks like:
>
> - hosts: all
>   remote_user: root
>   gather_facts: no
>
>   roles:
>     - ovirt-ova-query
>
> and it looks like it only runs query_ova.py - but on all hosts?

No, the engine provides ansible with the host to run on when it executes
the playbook. It would only be executed on the selected host.

> How does this work? ...or should it work?

It should - especially that part of querying the OVA, which is supposed
to be really quick.
Can you please share the engine log and
/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ?
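As an aside on the "hosts: all" question above: the playbook can say `all` because the inventory the engine generates (the `--inventory=/tmp/ansible-inventory...` file visible in the command line) contains only the selected host. A minimal sketch of that mechanism, using an illustrative host name rather than the engine's actual code:

```shell
#!/bin/sh
# Illustrative only: mimic how a one-host inventory constrains "hosts: all".
set -eu

workdir=$(mktemp -d)

# The engine writes a temporary inventory listing only the selected host.
inventory="$workdir/ansible-inventory-demo"
printf '%s\n' 'ovirt01.example.org' > "$inventory"

# With this inventory, "hosts: all" can only ever match that single host,
# so the playbook would run on ovirt01.example.org and nowhere else, e.g.:
#   ansible-playbook \
#     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa \
#     --inventory="$inventory" \
#     --extra-vars=ovirt_query_ova_path=/ova \
#     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml

cat "$inventory"   # the inventory holds exactly one host
```

So "all" here means "all hosts in the generated inventory", which is one.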
engine log is here:

https://pastebin.com/nWWM3UUq
Thanks. Alright, so now the configuration is fetched but its processing fails. We fixed many issues in this area recently, but it appears that something is wrong with the actual size of the disk within the ovf file that resides inside this ova file. Can you please share that ovf file that resides
inside /ova/HAAS-hpdio.ova?
file HAAS-hpdio.ova
HAAS-hpdio.ova: POSIX tar archive (GNU)

[root@ovirt01 backup]# tar xvf HAAS-hpdio.ova
HAAS-hpdio.ovf
HAAS-hpdio-disk001.vmdk
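Since an OVA is just a tar archive, the OVF descriptor can also be read straight out of it without unpacking the large disk image. The demo below builds a tiny stand-in archive first (the file names and XML are placeholders); with the real file you would run the last two commands against HAAS-hpdio.ova:

```shell
#!/bin/sh
# Demo: read the OVF out of an OVA without unpacking the (large) disk image.
set -eu

workdir=$(mktemp -d)
cd "$workdir"

# Build a tiny stand-in OVA (a real one holds a full OVF and a vmdk).
printf '<Envelope><DiskSection/></Envelope>\n' > demo.ovf
printf 'fake-disk-bytes\n' > demo-disk001.vmdk
tar cf demo.ova demo.ovf demo-disk001.vmdk

tar tf demo.ova              # list members; the .ovf comes first here
tar xOf demo.ova demo.ovf    # print the OVF to stdout, extracting nothing
```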
file HAAS-hpdio.ovf is here:

https://pastebin.com/80qAU0wB
Thanks again.

So that seems to be a VM that was exported from VirtualBox, right? They don't do anything that violates the OVF specification, but they do some non-common things that we don't anticipate:

First, they don't specify the actual size of the disk, and the current code in oVirt relies on that property. There is a workaround for this though: you can extract the OVA file, edit its OVF configuration - adding ovf:populatedSize="X" (and changing ovf:capacity, as I'll describe next) to the Disk element inside the DiskSection - and pack the OVA again (tar cvf <ova_file> <ovf_file> <disk_file>), where X is either:
1. the actual size of the vmdk file + some buffer (iirc, we used to take 15% of extra space for the conversion), or
2. if you're using file storage, or you don't mind consuming more storage space on your block storage, simply set X to the virtual size of the disk (in bytes) as indicated by the ovf:capacity field, e.g., ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova.

Second, the virtual size (indicated by ovf:capacity) is specified in bytes. The specification says that the default unit of allocation shall be bytes, but practically every OVA file that I've ever seen specifies it in GB, and the current code in oVirt kind of assumes that this is the case, without checking the ovf:capacityAllocationUnits attribute that could indicate the real unit of allocation [1]. Anyway, long story short, the virtual size of the disk should currently be specified in GB, e.g., ovf:capacity="20" in the case of HAAS-hpdio.ova.

That should do it. If not, please share the OVA file and I will examine it in my environment.

[1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220
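For readers who want to script that workaround, here is a sketch of the unpack-edit-repack cycle. It runs against a tiny stand-in OVF built on the fly, so the sed expressions and attribute layout are illustrative assumptions rather than the exact contents of HAAS-hpdio.ovf; 21474836480 bytes (20 GiB) is the capacity value discussed above:

```shell
#!/bin/sh
# Sketch of the workaround: unpack the OVA, add ovf:populatedSize to the
# Disk element, express ovf:capacity in GB, and repack with the .ovf as
# the first archive member.
set -eu

workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the OVF inside HAAS-hpdio.ova (the real file has many more
# attributes and a proper namespace declaration).
cat > HAAS-hpdio.ovf <<'EOF'
<Envelope>
  <DiskSection>
    <Disk ovf:capacity="21474836480" ovf:diskId="vmdisk2"/>
  </DiskSection>
</Envelope>
EOF
printf 'fake-disk\n' > HAAS-hpdio-disk001.vmdk
tar cf HAAS-hpdio.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk

# 1. unpack the OVA
tar xf HAAS-hpdio.ova

# 2. add ovf:populatedSize (option 2 above: the virtual size, in bytes)
sed -i 's/<Disk /<Disk ovf:populatedSize="21474836480" /' HAAS-hpdio.ovf

# 3. express ovf:capacity in GB, as the current import code expects
sed -i 's/ovf:capacity="21474836480"/ovf:capacity="20"/' HAAS-hpdio.ovf

# 4. repack; the .ovf goes first so readers find the descriptor up front
tar cf HAAS-hpdio_new.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk

grep Disk HAAS-hpdio.ovf
```

With a real image, only steps 1-4 apply, run in the directory holding the downloaded OVA.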
The file
/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log
in fact does not exist (nor does the folder /var/log/ovirt-engine/ova/).
This issue is also resolved in 4.2.2. In the meantime, please create the /var/log/ovirt-engine/ova/ folder manually and make sure its permissions match the ones of the other folders in /var/log/ovirt-engine.
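One possible shape for that manual step is sketched below. The 750 mode and the sibling folder name host-deploy are assumptions - copy whatever `ls -ld` on an existing folder under /var/log/ovirt-engine actually shows, and on a real host also chown the new folder to the same owner (typically the engine's ovirt user). The demo runs in a scratch prefix so it can be tried anywhere; on the engine host you would set prefix to the empty string:

```shell
#!/bin/sh
# Sketch: create the missing ova/ log folder with its mode copied from a
# sibling folder. Runs under a scratch prefix for safety.
set -eu

prefix=$(mktemp -d)          # on the engine host: prefix=""
logroot="$prefix/var/log/ovirt-engine"

# Stand-in for an existing sibling folder (names and 750 mode are assumed).
mkdir -p "$logroot/host-deploy"
chmod 750 "$logroot/host-deploy"

# Create ova/ and copy the sibling's permissions; on a real host also run:
#   chown ovirt:ovirt /var/log/ovirt-engine/ova
mkdir -p "$logroot/ova"
chmod --reference="$logroot/host-deploy" "$logroot/ova"

ls -ld "$logroot/ova"
```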
ok, done. After another try there is this log file:

/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log

https://pastebin.com/M5J44qur
Is it the log of the execution of the ansible playbook that was provided
with a path to the /ova folder? I'm interested in that in order to see how
it came about that its execution never completed.
Cheers,
Jiri Slezka
> I am using latest 4.2.1.7-1.el7.centos version
>
> Cheers,
> Jiri Slezka
>
> [1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS
> [2] https://haas.cesnet.cz/downloads/release-01/ - Image repository

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 02/20/2018 11:09 PM, Arik Hadas wrote:

> Sorry, I obviously meant 'latter' rather than 'former' before.. Yeah, I
> agree that would be better, at least until listing the OVA files in the
> folder is implemented (that was the original plan, btw) - could you
> please file a bug?

yes, sure
> Thanks again.
> So that seems to be a VM that was exported from VirtualBox, right?

yes, it is most likely an OVA from VirtualBox
> There is a workaround for this though: you can extract the OVA file,
> edit its OVF configuration - adding ovf:populatedSize="X" (and changing
> ovf:capacity, as I'll describe next) to the Disk element inside the
> DiskSection - and pack the OVA again [...]

wow, thanks for this excellent explanation. I have changed this in the
ovf file:

...
<Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="20" ...
...

then I was able to import this modified ova file (HAAS-hpdio_new.ova).
The interesting thing is that the VM was shown in the VM list for a while
(with state down with lock, and status initializing). After a while this
VM disappeared :-o

I am going to test it again and collect some logs...
> That should do it. If not, please share the OVA file and I will examine
> it in my environment.

original file is at
https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova

> Is it the log of the execution of the ansible playbook that was provided
> with a path to the /ova folder? I'm interested in that in order to see
> how comes that its execution never completed.

well, I don't think so, it is the log from the import with the full path
to the ova file
S1OI6J1tUil1r7ES/svNqVWVb7YkUEBcPo8ppfHnTI/uxsn2tslsWefsOGJxNYUUSMAb9Eow ggVuMIIEVqADAgECAhAKebGg8bOvnIyfOWAn4bpzMA0GCSqGSIb3DQEBCwUAMHIxCzAJBgNV BAYTAk5MMRYwFAYDVQQIEw1Ob29yZC1Ib2xsYW5kMRIwEAYDVQQHEwlBbXN0ZXJkYW0xDzAN BgNVBAoTBlRFUkVOQTEmMCQGA1UEAxMdVEVSRU5BIGVTY2llbmNlIFBlcnNvbmFsIENBIDMw HhcNMTcxMTE2MDAwMDAwWhcNMTgxMjE1MTIwMDAwWjCBlDETMBEGCgmSJomT8ixkARkWA29y ZzEWMBQGCgmSJomT8ixkARkWBnRlcmVuYTETMBEGCgmSJomT8ixkARkWA3RjczELMAkGA1UE BhMCQ1oxJTAjBgNVBAoTHFNpbGVzaWFuIFVuaXZlcnNpdHkgaW4gT3BhdmExHDAaBgNVBAMT E0ppcmkgU2xlemthIHNsZTAwMDEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC/ VwOD1hlYL6l7GzxNqV1ne7/iMF/gHvPfTwejsC2s9sby7It82qXPRBVA2s1Cjb1A3ucpdlDN MXM83Lvh881XfkxhS2YLLyiZDmlSzAqfoMLxQ2/E0m1UugttzGJF7/10pEwj0FJFhnIVwA/E 8svCcbhxwO9BBpUz8JG1C6fTd0qyzJtNXVyH+WuHQbU2jgu2JJ7miiEKE1Fis0hFf1rKxTzX aVGyXiQLOn7TZDfPtXrJEG7eWYlFUP58edyuJELpWHTPHn8xJKYTy8Qq5BgFNyCRQT/6imsh tZlDBZSEeqyoSNtLsC57ZrjqgtLCEQFK9EX27dOy0/u95zS0OIWdAgMBAAGjggHbMIIB1zAf BgNVHSMEGDAWgBSMnxEu5uN6BKUeVYtGCASm7ZdwpjAdBgNVHQ4EFgQUF1mSlcyDz9wWit9V jCz+zJ9CrpswDAYDVR0TAQH/BAIwADAdBgNVHREEFjAUgRJqaXJpLnNsZXprYUBzbHUuY3ow DgYDVR0PAQH/BAQDAgSwMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDBDA0BgNVHSAE LTArMAwGCiqGSIb3TAUCAgEwDAYKYIZIAYb9bAQfATANBgsqhkiG90wFAgMDAzCBhQYDVR0f BH4wfDA8oDqgOIY2aHR0cDovL2NybDMuZGlnaWNlcnQuY29tL1RFUkVOQWVTY2llbmNlUGVy c29uYWxDQTMuY3JsMDygOqA4hjZodHRwOi8vY3JsNC5kaWdpY2VydC5jb20vVEVSRU5BZVNj aWVuY2VQZXJzb25hbENBMy5jcmwwewYIKwYBBQUHAQEEbzBtMCQGCCsGAQUFBzABhhhodHRw Oi8vb2NzcC5kaWdpY2VydC5jb20wRQYIKwYBBQUHMAKGOWh0dHA6Ly9jYWNlcnRzLmRpZ2lj ZXJ0LmNvbS9URVJFTkFlU2NpZW5jZVBlcnNvbmFsQ0EzLmNydDANBgkqhkiG9w0BAQsFAAOC AQEADtFRxKphkcHVdWjR/+i1+cdHfkbicraHlU5Mpw8EX6nemKu4GGAWfzH+Y7p6ImZwUHWf /SSbrX+57xaFUBOr3jktQm1GRmGUZESEmsUDB8UZXzdQC79/tO9MzRhvEBXuQhdxdoO64Efx VqtYAB2ydqz7yWh56ioSwaQZEXo5rO1kZuAcmVz8Smd1r/Mur/h8Y+qbrsJng1GS25aMhFts UV6z9zXuHFkT9Ck8SLdCEDzjzYNjXIDB5n+QOmPXnXrZMlGiI/aOqa5k5Sv6xCIPdH2kbpyd M1YiH/ChmU9gWJvy0Jq42KGLvWBvuHEzcb3f473Fvn4GWsXu0zDS2oh2/TGCA8MwggO/AgEB 
MIGGMHIxCzAJBgNVBAYTAk5MMRYwFAYDVQQIEw1Ob29yZC1Ib2xsYW5kMRIwEAYDVQQHEwlB bXN0ZXJkYW0xDzANBgNVBAoTBlRFUkVOQTEmMCQGA1UEAxMdVEVSRU5BIGVTY2llbmNlIFBl cnNvbmFsIENBIDMCEAp5saDxs6+cjJ85YCfhunMwDQYJYIZIAWUDBAIBBQCgggINMBgGCSqG SIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTE4MDIyMTE0NDMyNVowLwYJ KoZIhvcNAQkEMSIEIH2KcYVxBCX2BfxWstaKx+2Y/sSzXwkb1p4X1VpdGVNEMGwGCSqGSIb3 DQEJDzFfMF0wCwYJYIZIAWUDBAEqMAsGCWCGSAFlAwQBAjAKBggqhkiG9w0DBzAOBggqhkiG 9w0DAgICAIAwDQYIKoZIhvcNAwICAUAwBwYFKw4DAgcwDQYIKoZIhvcNAwICASgwgZcGCSsG AQQBgjcQBDGBiTCBhjByMQswCQYDVQQGEwJOTDEWMBQGA1UECBMNTm9vcmQtSG9sbGFuZDES MBAGA1UEBxMJQW1zdGVyZGFtMQ8wDQYDVQQKEwZURVJFTkExJjAkBgNVBAMTHVRFUkVOQSBl U2NpZW5jZSBQZXJzb25hbCBDQSAzAhAKebGg8bOvnIyfOWAn4bpzMIGZBgsqhkiG9w0BCRAC CzGBiaCBhjByMQswCQYDVQQGEwJOTDEWMBQGA1UECBMNTm9vcmQtSG9sbGFuZDESMBAGA1UE BxMJQW1zdGVyZGFtMQ8wDQYDVQQKEwZURVJFTkExJjAkBgNVBAMTHVRFUkVOQSBlU2NpZW5j ZSBQZXJzb25hbCBDQSAzAhAKebGg8bOvnIyfOWAn4bpzMA0GCSqGSIb3DQEBAQUABIIBADiX VxlUkPvrHL6Ms2PCt0K44uQxWD2kom2QDhAJSophwU0+yDvB/B87d6yqUrFdVQTXB1//trgg O0f82Do8Aexx7fkeI8cgmOlmthiJQ6B2PYKGRiTNY6UyE584f6QJW4Ks391HYbRdzCkoBaL9 oOaBDV1WaSitY+YWI1Zx2na8vFLdOcfY/OiF9N3X+/7Du/wCNXMs3rj7BSQtdMNsFOyzAwnW +BJOPd6SxaLAMeFuKIG3LPaR1EI5GskWf06dzLLqN7dnR/4s0G+kpJDKlz09oltpAW08c8dM E56tPso0IoI+eUq4e2h3Qw4wuMgcR3aqN/3KD5C1wnXAglrI0JYAAAAAAAA= --------------ms090106090203090701080803--

On 02/20/2018 11:09 PM, Arik Hadas wrote:
> On Tue, Feb 20, 2018 at 6:37 PM, Jiří Slézka <jiri.slezka@slu.cz> wrote:
>> On 02/20/2018 03:48 PM, Arik Hadas wrote:

>>> And does it work when you provide a path to the actual ova file,
>>> i.e., /ova/HAAS-hpdio.ova, rather than to the directory?

>> this time it ends with a "Failed to load VM configuration from OVA
>> file: /ova/HAAS-hpdio.ova" error.

> Note that the logic that is applied to a specified folder is "try
> fetching an 'ova folder' out of the destination folder" rather than
> "list all the ova files inside the specified folder". It seems that
> you expected the former output, since there are no disks in that
> folder, right?

yes, it would be more user friendly to list all the ova files and then
select which one to import (like listing all VMs in the VMware import).
Maybe the description of the path field in the manager should be "Path
to ova file" instead of "Path" :-)

> Sorry, I obviously meant 'latter' rather than 'former' before..
> Yeah, I agree that would be better, at least until listing the OVA
> files in the folder is implemented (that was the original plan, btw) -
> could you please file a bug?

yes, sure

>> the playbook looks like
>>
>> - hosts: all
>>   remote_user: root
>>   gather_facts: no
>>
>>   roles:
>>     - ovirt-ova-query
>>
>> and it looks like it only runs query_ova.py but on all hosts?

> No, the engine provides ansible the host to run on when it executes
> the playbook. It would only be executed on the selected host.

>> How does this work? ...or should it work?

> It should, especially as that part of querying the OVA is supposed to
> be really quick. Can you please share the engine log and
> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log?

>> engine log is here:
>>
>> https://pastebin.com/nWWM3UUq

> Thanks. Alright, so now the configuration is fetched but its
> processing fails. We fixed many issues in this area recently, but it
> appears that something is wrong with the actual size of the disk
> within the ovf file that resides inside this ova file.
> Can you please share the ovf file that resides inside
> /ova/HAAS-hpdio.ova?
file HAAS-hpdio.ova
HAAS-hpdio.ova: POSIX tar archive (GNU)

[root@ovirt01 backup]# tar xvf HAAS-hpdio.ova
HAAS-hpdio.ovf
HAAS-hpdio-disk001.vmdk

file HAAS-hpdio.ovf is here:
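Since an OVA is just a tar archive, the same inspection can be scripted. A self-contained sketch with Python's `tarfile` module; it fabricates a tiny stand-in archive with the member names from the thread rather than reading the real HAAS-hpdio.ova:

```python
import io
import tarfile

# Build a tiny stand-in OVA (an OVA is just a tar archive holding an
# OVF descriptor plus disk images). Member names mirror HAAS-hpdio.ova.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("HAAS-hpdio.ovf", b"<Envelope/>"),
                          ("HAAS-hpdio-disk001.vmdk", b"\x00" * 512)]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Inspect it the same way `tar tvf` would, and pull out the OVF descriptor.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    members = tar.getnames()
    ovf_name = next(n for n in members if n.endswith(".ovf"))
    ovf_xml = tar.extractfile(ovf_name).read().decode()

print(members)   # ['HAAS-hpdio.ovf', 'HAAS-hpdio-disk001.vmdk']
print(ovf_xml)   # <Envelope/>
```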
> Thanks again.
> So that seems to be a VM that was exported from VirtualBox, right?
> They don't do anything that violates the OVF specification, but they
> do some non-common things that we don't anticipate:

yes, it is most likely an ova from VirtualBox

> First, they don't specify the actual size of the disk, and the current
> code in oVirt relies on that property. There is a workaround for this,
> though: you can extract the OVA file, edit its OVF configuration -
> adding ovf:populatedSize="X" (and changing ovf:capacity, as I'll
> describe next) to the Disk element inside the DiskSection - and pack
> the OVA again (tar cvf <ova_file> <ovf_file> <disk_file>), where X is
> either:
> 1. the actual size of the vmdk file + some buffer (IIRC, we used to
> take 15% of extra space for the conversion), or
> 2. if you're using file storage, or you don't mind consuming more
> storage space on your block storage, simply the virtual size of the
> disk (in bytes) as indicated by the ovf:capacity field, e.g.,
> ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova.
>
> Second, the virtual size (indicated by ovf:capacity) is specified in
> bytes. The specification says that the default unit of allocation
> shall be bytes, but practically every OVA file that I've ever seen
> specifies it in GB, and the current code in oVirt kind of assumes that
> this is the case, without checking the ovf:capacityAllocationUnits
> attribute that could indicate the real unit of allocation [1].
> Anyway, long story short, the virtual size of the disk should
> currently be specified in GB, e.g., ovf:capacity="20" in the case of
> HAAS-hpdio.ova.

wow, thanks for this excellent explanation. I have changed this in the
ovf file:

...
<Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="20" ...
...

then I was able to import this modified ova file (HAAS-hpdio_new.ova).
The interesting thing is that the VM was shown in the VM list for a
while (state down with a lock, status "initializing"). After a while
this VM disappeared :-o

I am going to test it again and collect some logs...
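The two OVF edits described above (adding ovf:populatedSize in bytes, and rewriting ovf:capacity in GB) can be sketched in Python on a minimal, hypothetical Disk element. The attribute values follow the HAAS-hpdio.ova example (20 GB = 21474836480 bytes); the element is simplified and not a full VirtualBox descriptor:

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

# A minimal, hypothetical Disk element as VirtualBox might emit it:
# ovf:capacity in bytes, no ovf:populatedSize attribute.
disk_xml = (
    '<Disk xmlns:ovf="%s" ovf:capacity="21474836480" '
    'ovf:diskId="vmdisk2" ovf:fileRef="file1"/>' % OVF_NS
)
disk = ET.fromstring(disk_xml)

cap_attr = "{%s}capacity" % OVF_NS
pop_attr = "{%s}populatedSize" % OVF_NS

capacity_bytes = int(disk.get(cap_attr))

# Workaround option 2: set populatedSize to the full virtual size in bytes.
disk.set(pop_attr, str(capacity_bytes))
# And express the capacity in GB, which is what the import code expects here.
disk.set(cap_attr, str(capacity_bytes // 2**30))

print(ET.tostring(disk).decode())
```

The edited element would then be written back into the .ovf file before repacking the archive with tar.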
there are interesting logs in /var/log/vdsm/import/ at the host used
for the import:

http://mirror.slu.cz/tmp/ovirt-import.tar.bz2

the first of them describes the situation where I chose thick
provisioning, the second the situation with thin provisioning. The
interesting part, I believe, is:

libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o preallocation=off,compat=0.10
libguestfs: command: run: \ /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec
libguestfs: command: run: \ 21474836480
Formatting '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec', fmt=qcow2 size=21474836480 compat=0.10 encryption=off cluster_size=65536 preallocation=off lazy_refcounts=off refcount_bits=16
libguestfs: trace: vdsm_disk_create: disk_create = 0
qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2' '/var/tmp/v2vovl2dccbd.qcow2' '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec'
qemu-img: error while writing sector 1000960: No space left on device
virt-v2v: error: qemu-img command failed, see earlier errors
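For scale, assuming qemu-img's usual 512-byte sector unit, the failing sector in the log above corresponds to roughly half a GiB actually written, a tiny fraction of the 21474836480-byte virtual size passed to `qemu-img create`. That is consistent with a thinly provisioned block volume that was not extended during the conversion. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope arithmetic on the failure above, assuming the
# conventional 512-byte sector unit used by qemu-img error messages.
failed_sector = 1000960      # from "error while writing sector 1000960"
virtual_size = 21474836480   # the size passed to `qemu-img create` in the log

written_bytes = failed_sector * 512
print(written_bytes)                           # 512491520 bytes
print(round(written_bytes / 2**30, 2))         # 0.48 -> ~0.48 GiB before ENOSPC
print(round(written_bytes / virtual_size, 3))  # 0.024 -> ~2% of the virtual size
```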
> That should do it. If not, please share the OVA file and I will
> examine it in my environment.

the original file is at:

https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova
[1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220
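For illustration, the ovf:capacityAllocationUnits attribute mentioned above could be honored with a small parser. This is only a hedged sketch, not oVirt's implementation; it assumes the common "byte * 2^N" programmatic-unit form that OVF descriptors use for that attribute:

```python
import re

def allocation_unit_bytes(units: str) -> int:
    """Interpret an ovf:capacityAllocationUnits value of the common
    'byte * 2^N' form. Falls back to 1 byte when the attribute is
    absent, matching the default unit described in the thread."""
    if not units:
        return 1
    m = re.fullmatch(r"\s*byte\s*\*\s*2\^(\d+)\s*", units)
    if not m:
        raise ValueError("unsupported unit expression: %r" % units)
    return 2 ** int(m.group(1))

# 'byte * 2^30' means the capacity is counted in GiB, so a capacity of
# 20 resolves to the 21474836480 bytes seen for HAAS-hpdio.ova:
print(20 * allocation_unit_bytes("byte * 2^30"))  # 21474836480
print(allocation_unit_bytes(""))                  # 1 (no attribute: bytes)
```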
the file
/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log
in fact does not exist (nor does the folder /var/log/ovirt-engine/ova/)

> This issue is also resolved in 4.2.2.
> In the meantime, please create the /var/log/ovirt-engine/ova/ folder
> manually and make sure its permissions match the ones of the other
> folders in /var/log/ovirt-engine.

ok, done. After another try there is this log file:
/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log
> Is it the log of the execution of the ansible playbook that was
> provided with a path to the /ova folder? I'm interested in that in
> order to see how it comes that its execution never completed.

well, I don't think so, it is the log from the import with the full
path to the ova file
Cheers,

Jiri Slezka

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On Wed, Feb 21, 2018 at 6:03 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
On 02/20/2018 11:09 PM, Arik Hadas wrote:
On Tue, Feb 20, 2018 at 6:37 PM, Jiří Sléžka <jiri.slezka@slu.cz <mailto:jiri.slezka@slu.cz>> wrote:
On 02/20/2018 03:48 PM, Arik Hadas wrote: > > > On Tue, Feb 20, 2018 at 3:49 PM, Jiří Sléžka <jiri.slezka@slu.cz
<mailto:jiri.slezka@slu.cz>
> <mailto:jiri.slezka@slu.cz <mailto:jiri.slezka@slu.cz>>> wrote: > > Hi Arik, > > On 02/20/2018 01:22 PM, Arik Hadas wrote: > > > > > > On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka <
jiri.slezka@slu.cz <mailto:jiri.slezka@slu.cz>
<mailto:jiri.slezka@slu.cz <mailto:jiri.slezka@slu.cz>> > > <mailto:jiri.slezka@slu.cz <mailto:jiri.slezka@slu.cz> <mailto:jiri.slezka@slu.cz <mailto:jiri.slezka@slu.cz>>>> wrote: > > > > Hi, > > > > > > Hi Jiří, > > > > > > > > I would like to try import some ova files into our oVirt instance [1] > > [2] but I facing problems. > > > > I have downloaded all ova images into one of hosts (ovirt01) into > > direcory /ova > > > > ll /ova/ > > total 6532872 > > -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf > > -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova > > -rw-r--r--. 1 vdsm kvm 846736896 Feb 16 16:22 HAAS-hpjdwpd.ova > > -rw-r--r--. 1 vdsm kvm 891043328 Feb 16 16:23 HAAS-hptelnetd.ova > > -rw-r--r--. 1 vdsm kvm 908222464 Feb 16 16:23 HAAS-hpuchotcp.ova > > -rw-r--r--. 1 vdsm kvm 880643072 Feb 16 16:24 HAAS-hpuchoudp.ova > > -rw-r--r--. 1 vdsm kvm 890833920 Feb 16 16:24 HAAS-hpuchoweb.ova > > > > Then I tried to import them - from host ovirt01 and directory /ova but > > spinner spins infinitly and nothing is happen. > > > > > > And does it work when you provide a path to the actual ova file, i.e., > > /ova/HAAS-hpdio.ova, rather than to the directory? > > this time it ends with "Failed to load VM configuration from OVA file: > /ova/HAAS-hpdio.ova" error. > > > Note that the logic that is applied on a specified folder is "try > fetching an 'ova folder' out of the destination folder" rather
> "list all the ova files inside the specified folder". It seems that you > expected the former output since there are no disks in that folder, right?
yes, It would be more user friendly to list all ova files and then select which one to import (like listing all vms in vmware import)
Maybe description of path field in manager should be "Path to ova
file"
instead of "Path" :-)
Sorry, I obviously meant 'latter' rather than 'former' before.. Yeah, I agree that would be better, at least until listing the OVA files in the folder is implemented (that was the original plan, btw) - could you please file a bug?
yes, sure
>> I cannot see anything relevant in the vdsm log of host ovirt01.
>>
>> In the engine.log of our standalone ovirt manager there is just this
>> relevant line:
>>
>> 2018-02-20 12:35:04,289+01 INFO
>> [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default
>> task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible
>> command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin
>> [/usr/bin/ansible-playbook,
>> --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa,
>> --inventory=/tmp/ansible-inventory8237874608161160784,
>> --extra-vars=ovirt_query_ova_path=/ova,
>> /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile:
>> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log]
>>
>> also there are two ansible processes which are still running (and make
>> a heavy load on the system (load 9+ and growing; it looks like it eats
>> all the memory and the system starts swapping))
>>
>> ovirt 32087  3.3  0.0 332252 5980 ?    Sl 12:35  0:41
>> /usr/bin/python2 /usr/bin/ansible-playbook
>> --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
>> --inventory=/tmp/ansible-inventory8237874608161160784
>> --extra-vars=ovirt_query_ova_path=/ova
>> /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
>> ovirt 32099 57.5 78.9 15972880 11215312 ?  R  12:35 11:52
>> /usr/bin/python2 /usr/bin/ansible-playbook
>> --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
>> --inventory=/tmp/ansible-inventory8237874608161160784
>> --extra-vars=ovirt_query_ova_path=/ova
>> /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
>>
>> the playbook looks like
>>
>> - hosts: all
>>   remote_user: root
>>   gather_facts: no
>>
>>   roles:
>>     - ovirt-ova-query
>>
>> and it looks like it only runs query_ova.py, but on all hosts?
>
> No, the engine provides ansible the host to run on when it executes the
> playbook. It would only be executed on the selected host.
>
>> How does this work? ...or should it work?
>
> It should, and especially the part of querying the OVA is supposed to
> be really quick.
> Can you please share the engine log and
> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ?

engine log is here:

https://pastebin.com/nWWM3UUq

> Thanks.
> Alright, so now the configuration is fetched but its processing fails.
> We fixed many issues in this area recently, but it appears that
> something is wrong with the actual size of the disk within the ovf file
> that resides inside this ova file.
> Can you please share that ovf file that resides inside /ova/HAAS-hpdio.ova?
file HAAS-hpdio.ova
HAAS-hpdio.ova: POSIX tar archive (GNU)

[root@ovirt01 backup]# tar xvf HAAS-hpdio.ova
HAAS-hpdio.ovf
HAAS-hpdio-disk001.vmdk

file HAAS-hpdio.ovf is here:

https://pastebin.com/80qAU0wB
Thanks again. So that seems to be a VM that was exported from
VirtualBox, right? They don't do anything that violates the OVF
specification, but they do some uncommon things that we don't anticipate:
yes, it is most likely an ova from VirtualBox
First, they don't specify the actual size of the disk, and the current
code in oVirt relies on that property. There is a workaround for this
though: you can extract the OVA file, edit its OVF configuration -
adding ovf:populatedSize="X" (and changing ovf:capacity, as I'll
describe next) to the Disk element inside the DiskSection - and pack the
OVA again (tar cvf <ova_file> <ovf_file> <disk_file>), where X is either:
1. the actual size of the vmdk file + some buffer (iirc, we used to take
15% of extra space for the conversion)
2. if you're using file storage, or you don't mind consuming more
storage space on your block storage, simply set X to the virtual size of
the disk (in bytes) as indicated by the ovf:capacity field, e.g.,
ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova.
Second, the virtual size (indicated by ovf:capacity) is specified in
bytes. The specification says that the default unit of allocation shall
be bytes, but practically every OVA file that I've ever seen specifies
it in GB, and the current code in oVirt kind of assumes that this is the
case without checking the ovf:capacityAllocationUnits attribute that
could indicate the real unit of allocation [1]. Anyway, long story
short, the virtual size of the disk should currently be specified in GB,
e.g., ovf:populatedSize="20" in the case of HAAS-hpdio.ova.
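As an aside, a reader that did honor ovf:capacityAllocationUnits would
normalize the capacity roughly like this. This is an illustrative sketch
only, not the actual oVirt code; the OVF spec expresses units such as GiB
as "byte * 2^30", and the parsing regex below is my own simplification:

```python
import re

def capacity_in_bytes(capacity, allocation_units=None):
    """Normalize ovf:capacity using ovf:capacityAllocationUnits.

    Per the OVF spec the default unit is bytes; larger units are
    written as programmatic units, e.g. "byte * 2^30" for GiB.
    """
    if not allocation_units:
        # No ovf:capacityAllocationUnits attribute: capacity is bytes.
        return int(capacity)
    m = re.fullmatch(r"byte\s*\*\s*2\^(\d+)", allocation_units)
    if not m:
        raise ValueError("unsupported unit: %r" % allocation_units)
    return int(capacity) * 2 ** int(m.group(1))

# VirtualBox style: capacity already in bytes
print(capacity_in_bytes("21474836480"))        # 21474836480
# The style most OVAs use: capacity in GiB
print(capacity_in_bytes("20", "byte * 2^30"))  # 21474836480
```

Both calls come out to the same number of bytes, which is exactly why
code that skips the unit check mostly gets away with assuming GB.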
wow, thanks for this excellent explanation. I have changed this in the
ovf file:

...
<Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="20" ...
...
then I was able to import this modified ova file (HAAS-hpdio_new.ova).
The interesting thing is that the vm was shown in the vm list for a
while (in state down, with a lock, and status "initializing"). After a
while this vm disappeared :-o
I am going to test it again and collect some logs...
there are interesting logs in /var/log/vdsm/import/ on the host used for
the import
http://mirror.slu.cz/tmp/ovirt-import.tar.bz2
the first of them describes the situation where I chose thick
provisioning, the second the situation with thin provisioning

the interesting part is, I believe:
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o preallocation=off,compat=0.10
libguestfs: command: run: \ /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec
libguestfs: command: run: \ 21474836480
Formatting '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec',
fmt=qcow2 size=21474836480 compat=0.10 encryption=off cluster_size=65536
preallocation=off lazy_refcounts=off refcount_bits=16
libguestfs: trace: vdsm_disk_create: disk_create = 0
qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2'
'/var/tmp/v2vovl2dccbd.qcow2'
'/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec'
qemu-img: error while writing sector 1000960: No space left on device
virt-v2v: error: qemu-img command failed, see earlier errors
Sorry again, I made a mistake in:

"Anyway, long story short, the virtual size of the disk should currently
be specified in GB, e.g., ovf:populatedSize="20" in the case of
HAAS-hpdio.ova."

I should have written ovf:capacity="20". So if you wish the actual size
of the disk to be 20GB (which means the disk is preallocated), the disk
element should be set with:

<Disk ovf:capacity="20" ovf:diskId="vmdisk2"
ovf:populatedSize="21474836480" ...
That should do it. If not, please share the OVA file and I will examine it in my environment.
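Putting the whole unpack / edit / repack workaround together with the
corrected values, the cycle can be sketched like this. This is a hedged
sketch, not a supported tool: it first builds a dummy OVA (the real one
carries a full VMDK image), the file names are the ones from this
thread, and the populatedSize follows option 2 (the virtual size in
bytes):

```python
import io
import tarfile

OVF, VMDK = "HAAS-hpdio.ovf", "HAAS-hpdio-disk001.vmdk"

def pack_ova(path, ovf_bytes, disk_bytes):
    # OVAs are plain tar archives; the OVF descriptor goes in first.
    with tarfile.open(path, "w") as tar:
        for name, data in ((OVF, ovf_bytes), (VMDK, disk_bytes)):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

# Stand-in for the real OVA, so this snippet is self-contained.
pack_ova("HAAS-hpdio.ova",
         b'<Disk ovf:capacity="20" ovf:diskId="vmdisk2" />',
         b"\x00" * 512)

# 1. extract the members
with tarfile.open("HAAS-hpdio.ova") as tar:
    members = {m.name: tar.extractfile(m).read() for m in tar}

# 2. add ovf:populatedSize to the Disk element (virtual size in bytes),
#    keeping ovf:capacity in GB as corrected above
patched = members[OVF].replace(
    b"<Disk ", b'<Disk ovf:populatedSize="21474836480" ', 1)

# 3. repack into a new OVA
pack_ova("HAAS-hpdio_new.ova", patched, members[VMDK])
```

On a real image the same effect is what the tar extract / edit / tar cvf
steps from earlier in the thread achieve by hand.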
original file is at
https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova
[1]
backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220
>> file
>> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log
>> in fact does not exist (nor does the folder /var/log/ovirt-engine/ova/)
>
> This issue is also resolved in 4.2.2.
> In the meantime, please create the /var/log/ovirt-engine/ova/ folder
> manually and make sure its permissions match the ones of the other
> folders in /var/log/ovirt-engine.

ok, done. After another try there is this log file:

/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log

> Is it the log of the execution of the ansible playbook that was
> provided with a path to the /ova folder? I'm interested in that in
> order to see how it comes that its execution never completed.

well, I don't think so, it is the log from an import with the full path
to the ova file
> Cheers,
> Jiri Slezka
>
>> I am using latest 4.2.1.7-1.el7.centos version
>>
>> Cheers,
>> Jiri Slezka
>>
>> [1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS
>> [2] https://haas.cesnet.cz/downloads/release-01/ - Image repository
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 02/21/2018 05:35 PM, Arik Hadas wrote:
> So if you wish the actual size of the disk to be 20GB (which means the
> disk is preallocated), the disk element should be set with:
>
> <Disk ovf:capacity="20" ovf:diskId="vmdisk2"
> ovf:populatedSize="21474836480" ...
now I have this in the ovf file:

<Disk ovf:capacity="20" ovf:diskId="vmdisk2"
ovf:populatedSize="21474836480" ...

but the import fails again, in this case faster. It looks like the SPM
cannot create the disk image

log from the SPM host...

2018-02-21 18:02:03,599+0100 INFO (jsonrpc/1) [vdsm.api] START
createVolume(sdUUID=u'69f6b3e7-d754-44cf-a665-9d7128260401',
spUUID=u'00000002-0002-0002-0002-0000000002b9',
imgUUID=u'0a5c4ecb-2c04-4f96-858a-4f74915d5caa', size=u'20',
volFormat=4, preallocate=2, diskType=u'DATA',
volUUID=u'bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0',
desc=u'{"DiskAlias":"HAAS-hpdio-disk001.vmdk","DiskDescription":""}',
srcImgUUID=u'00000000-0000-0000-0000-000000000000',
srcVolUUID=u'00000000-0000-0000-0000-000000000000',
initialSize=u'21474836480') from=::ffff:193.84.206.172,53154,
flow_id=e27cd35a-dc4e-4e72-a3ef-aa5b67c2bdab,
task_id=e7598aa1-420a-4612-9ee8-03012b1277d9 (api:46)
2018-02-21 18:02:03,603+0100 INFO (jsonrpc/1) [IOProcessClient] Starting
client ioprocess-3931 (__init__:330)
2018-02-21 18:02:03,638+0100 INFO (ioprocess/56120) [IOProcess] Starting
ioprocess (__init__:452)
2018-02-21 18:02:03,661+0100 INFO (jsonrpc/1) [vdsm.api] FINISH
createVolume return=None from=::ffff:193.84.206.172,53154,
flow_id=e27cd35a-dc4e-4e72-a3ef-aa5b67c2bdab,
task_id=e7598aa1-420a-4612-9ee8-03012b1277d9 (api:52)
2018-02-21 18:02:03,692+0100 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer]
RPC call Volume.create succeeded in 0.09 seconds (__init__:573)
2018-02-21 18:02:03,694+0100 INFO (tasks/1)
[storage.ThreadPool.WorkerThread] START task
e7598aa1-420a-4612-9ee8-03012b1277d9 (cmd=<bound method Task.commit of
<vdsm.storage.task.Task instance at 0x3faa050>>, args=None)
(threadPool:208)
2018-02-21 18:02:03,995+0100 INFO (tasks/1) [storage.StorageDomain]
Create placeholder
/rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665-9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa
for image's volumes (sd:1244)
2018-02-21 18:02:04,016+0100 INFO (tasks/1) [storage.Volume] Creating
volume bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 (volume:1151)
2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] The
requested initial 21474836480 is bigger than the max size 134217728
(blockVolume:345)
2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] Failed to
create volume
/rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665-9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa/bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0:
Invalid parameter: 'initial size=41943040' (volume:1175)
2018-02-21 18:02:04,061+0100 ERROR (tasks/1) [storage.Volume] Unexpected
error (volume:1215)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
1172, in create
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
line 501, in _create
    size, initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
line 545, in calculate_volume_alloc_size
    preallocate, capacity, initial_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py",
line 347, in calculate_volume_alloc_size
    initial_size)
InvalidParameterException: Invalid parameter: 'initial size=41943040'
2018-02-21 18:02:04,062+0100 ERROR (tasks/1) [storage.TaskManager.Task]
(Task='e7598aa1-420a-4612-9ee8-03012b1277d9') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
882, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
336, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1936,
in createVolume
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 801,
in createVolume
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
1217, in create
    (volUUID, e))
VolumeCreationError: Error creating a new volume: (u"Volume creation
bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 failed: Invalid parameter: 'initial
size=41943040'",)

there are no new logs in the import folder on the host used for the
import...
>> That should do it. If not, please share the OVA file and I will examine
>> it in my environment.
>
> original file is at
>
> https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova
>
>> [1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220
>>
>>     >     file
>>     >     /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log
>>     >     in fact does not exist (nor does the folder /var/log/ovirt-engine/ova/)
>>     >
>>     > This issue is also resolved in 4.2.2.
>>     > In the meantime, please create the /var/log/ovirt-engine/ova/ folder
>>     > manually and make sure its permissions match the ones of the other
>>     > folders in /var/log/ovirt-engine.
>>
>>     ok, done. After another try there is this log file:
>>
>>     /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log
>>
>>     https://pastebin.com/M5J44qur
>>
>> Is it the log of the execution of the ansible playbook that was provided
>> with a path to the /ova folder?
>> I'm interested in that in order to see how it comes that its execution
>> never completed.
>
> well, I don't think so, it is the log from the import with the full path
> to the ova file
=C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<https://haas.cesnet.cz/= #!index.md <https://haas.cesnet.cz/#!index.md> >>=C2=A0 =C2=A0 =C2=A0<https://haas.cesnet.cz/#!index.md <https://haas.cesnet.cz/#!index.md>>>> - Cesnet HAAS >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0[2]= https://haas.cesnet.cz/downloads/release-01/ <https://haas.cesnet.cz/downloads/release-01/> >>=C2=A0 =C2=A0 =C2=A0<https://haas.cesnet.cz/downloads/release-01/=
<https://haas.cesnet.cz/downloads/release-01/>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<https://haas.cesnet.cz/=
downloads/release-01/
<https://haas.cesnet.cz/downloads/release-01/> >>=C2=A0 =C2=A0 =C2=A0<https://haas.cesnet.cz/downloads/release-01/=
<https://haas.cesnet.cz/downloads/release-01/>>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<ht=
tps://haas.cesnet.cz/downloads/release-01/
<https://haas.cesnet.cz/downloads/release-01/> >>=C2=A0 =C2=A0 =C2=A0<https://haas.cesnet.cz/downloads/release-01/=
<https://haas.cesnet.cz/downloads/release-01/>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<https://haas.cesnet.cz/=
downloads/release-01/
<https://haas.cesnet.cz/downloads/release-01/> >>=C2=A0 =C2=A0 =C2=A0<https://haas.cesnet.cz/downloads/release-01/=
<https://haas.cesnet.cz/downloads/release-01/>>>> - Image repositor=
y
>>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0___=
____________________________________________
>>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0Use=
rs mailing list
>>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0Use=
rs@ovirt.org <mailto:Users@ovirt.org>
<mailto:Users@ovirt.org <mailto:Users@ovirt.org>> <mailto:Users@ovirt.org <mailto:Users@ovirt.org> >>=C2=A0 =C2=A0 =C2=A0<mailto:Users@ovirt.org <mailto:Users@ovirt.o=
rg>>>
>>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<mailto:Users@ovirt.org =
<mailto:Users@ovirt.org>
<mailto:Users@ovirt.org <mailto:Users@ovirt.org>> >>=C2=A0 =C2=A0 =C2=A0<mailto:Users@ovirt.org <mailto:Users@ovirt.o=
rg>
<mailto:Users@ovirt.org <mailto:Users@ovirt.org>>>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0htt=
p://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users> >>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/mailman/listinfo/user=
s
<http://lists.ovirt.org/mailman/listinfo/users>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/=
mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users> >>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/mailman/listinfo/user=
s
<http://lists.ovirt.org/mailman/listinfo/users>>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<ht=
tp://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users> >>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/mailman/listinfo/user=
s
<http://lists.ovirt.org/mailman/listinfo/users>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/=
mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users> >>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/mailman/listinfo/user=
s
<http://lists.ovirt.org/mailman/listinfo/users>>>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0> >>=C2=A0 =C2=A0 =C2=A0> >>=C2=A0 =C2=A0 =C2=A0> >>=C2=A0 =C2=A0 =C2=A0> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0________________________=
_______________________
>>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0Users mailing list >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0Users@ovirt.org <mailto:=
Users@ovirt.org>
<mailto:Users@ovirt.org <mailto:Users@ovirt.org>> >>=C2=A0 =C2=A0 =C2=A0<mailto:Users@ovirt.org <mailto:Users@ovirt.o=
rg>
<mailto:Users@ovirt.org <mailto:Users@ovirt.org>>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0http://lists.ovirt.org/m=
ailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users> >>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/mailman/listinfo/user=
s
<http://lists.ovirt.org/mailman/listinfo/users>> >>=C2=A0 =C2=A0 =C2=A0>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/=
mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users> >>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/mailman/listinfo/user=
s
<http://lists.ovirt.org/mailman/listinfo/users>>> >>=C2=A0 =C2=A0 =C2=A0> >>=C2=A0 =C2=A0 =C2=A0> >> >> >> >>=C2=A0 =C2=A0 =C2=A0_____________________________________________=
__
>>=C2=A0 =C2=A0 =C2=A0Users mailing list >>=C2=A0 =C2=A0 =C2=A0Users@ovirt.org <mailto:Users@ovirt.org> <mailto:Users@ovirt.org <mailto:Users@ovirt.org>> >>=C2=A0 =C2=A0 =C2=A0http://lists.ovirt.org/mailman/listinfo/users=
<http://lists.ovirt.org/mailman/listinfo/users> >>=C2=A0 =C2=A0 =C2=A0<http://lists.ovirt.org/mailman/listinfo/user=
s
<http://lists.ovirt.org/mailman/listinfo/users>> >> >> > > > > > _______________________________________________ > Users mailing list > Users@ovirt.org <mailto:Users@ovirt.org> > http://lists.ovirt.org/mailman/listinfo/users <http://lists.ovirt.org/mailman/listinfo/users> > =20 =20 =20 _______________________________________________ Users mailing list Users@ovirt.org <mailto:Users@ovirt.org> http://lists.ovirt.org/mailman/listinfo/users <http://lists.ovirt.org/mailman/listinfo/users> =20 =20

So I have some good news and some bad news.

The good news is that I just used the provided OVA and identified the issues that prevent oVirt from processing its OVF configuration:
1. The <File> element in the References section lacks the ovf:size attribute, and oVirt, unfortunately, is not prepared for that.
2. The USB item doesn't include an oVirt-specific attribute (which makes sense..) that oVirt requires (which doesn't make sense..) called usbPolicy.

I'll post fixes for those issues. In the meantime, the OVF can be modified with the following changes:
1. Add ovf:size="3221225472" to the File element (there's no need for more than 3 GB; even 2 GB should be enough).
2. Remove the following Item:

<Item>
  <rasd:Address>0</rasd:Address>
  <rasd:Caption>usb</rasd:Caption>
  <rasd:Description>USB Controller</rasd:Description>
  <rasd:ElementName>usb</rasd:ElementName>
  <rasd:InstanceID>6</rasd:InstanceID>
  <rasd:ResourceType>23</rasd:ResourceType>
</Item>

The bad news is that the conversion that would finally start with those changes then fails on my host (with virt-v2v v1.36.3) with the following error:

supermin: failed to find a suitable kernel (host_cpu=x86_64).
I looked for kernels in /boot and modules in /lib/modules.
If this is a Xen guest, and you only have Xen domU kernels installed,
try installing a fullvirt kernel (only for supermin use, you shouldn't
boot the Xen guest with it).
libguestfs: trace: v2v: launch = -1 (error)

@Richard, this is an OVA of a VM installed with Debian64 as the guest OS that was exported from VirtualBox - is it supported by virt-v2v?

On Wed, Feb 21, 2018 at 7:10 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
On 02/21/2018 05:35 PM, Arik Hadas wrote:
On Wed, Feb 21, 2018 at 6:03 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
On 02/21/2018 03:43 PM, Jiří Sléžka wrote:
> On 02/20/2018 11:09 PM, Arik Hadas wrote:
>>
>> On Tue, Feb 20, 2018 at 6:37 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
>>
>>     On 02/20/2018 03:48 PM, Arik Hadas wrote:
>>     >
>>     > On Tue, Feb 20, 2018 at 3:49 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
>>     >
>>     >     Hi Arik,
>>     >
>>     >     On 02/20/2018 01:22 PM, Arik Hadas wrote:
>>     >     >
>>     >     > On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
>>     >     >
>>     >     >     Hi,
>>     >     >
>>     >     > Hi Jiří,
>>     >     >
>>     >     >     I would like to try to import some OVA files into our oVirt
>>     >     >     instance [1] [2] but I am facing problems.
>>     >     >
>>     >     >     I have downloaded all OVA images onto one of the hosts
>>     >     >     (ovirt01) into the directory /ova:
>>     >     >
>>     >     >     ll /ova/
>>     >     >     total 6532872
>>     >     >     -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf
>>     >     >     -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova
>>     >     >     -rw-r--r--. 1 vdsm kvm  846736896 Feb 16 16:22 HAAS-hpjdwpd.ova
>>     >     >     -rw-r--r--. 1 vdsm kvm  891043328 Feb 16 16:23 HAAS-hptelnetd.ova
>>     >     >     -rw-r--r--. 1 vdsm kvm  908222464 Feb 16 16:23 HAAS-hpuchotcp.ova
>>     >     >     -rw-r--r--. 1 vdsm kvm  880643072 Feb 16 16:24 HAAS-hpuchoudp.ova
>>     >     >     -rw-r--r--. 1 vdsm kvm  890833920 Feb 16 16:24 HAAS-hpuchoweb.ova
>>     >     >
>>     >     >     Then I tried to import them - from host ovirt01 and directory
>>     >     >     /ova - but the spinner spins infinitely and nothing happens.
>>     >     >
>>     >     > And does it work when you provide a path to the actual OVA file,
>>     >     > i.e., /ova/HAAS-hpdio.ova, rather than to the directory?
>>     >
>>     >     this time it ends with a "Failed to load VM configuration from
>>     >     OVA file: /ova/HAAS-hpdio.ova" error.
>>     >
>>     > Note that the logic that is applied on a specified folder is "try
>>     > fetching an 'ova folder' out of the destination folder" rather than
>>     > "list all the ova files inside the specified folder". It seems that you
>>     > expected the former output since there are no disks in that folder, right?
>>
>>     yes, it would be more user friendly to list all OVA files and then
>>     select which one to import (like listing all VMs in the VMware import)
>>
>>     Maybe the description of the path field in the manager should be
>>     "Path to OVA file" instead of "Path" :-)
>>
>> Sorry, I obviously meant 'latter' rather than 'former' before..
>> Yeah, I agree that would be better, at least until listing the OVA files
>> in the folder is implemented (that was the original plan, btw) - could
>> you please file a bug?
>
> yes, sure
>
>>     >     >     I cannot see anything relevant in the vdsm log of host ovirt01.
>>     >     >
>>     >     >     In the engine.log of our standalone oVirt manager there is just
>>     >     >     this relevant line:
>>     >     >
>>     >     >     2018-02-20 12:35:04,289+01 INFO
>>     >     >     [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default
>>     >     >     task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible
>>     >     >     command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin
>>     >     >     [/usr/bin/ansible-playbook,
>>     >     >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa,
>>     >     >     --inventory=/tmp/ansible-inventory8237874608161160784,
>>     >     >     --extra-vars=ovirt_query_ova_path=/ova,
>>     >     >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile:
>>     >     >     /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log]
>>     >     >
>>     >     >     also there are two ansible processes which are still running
>>     >     >     (and make a heavy load on the system (load 9+ and growing; it
>>     >     >     looks like it eats all the memory and the system starts swapping)):
>>     >     >
>>     >     >     ovirt 32087  3.3  0.0 332252 5980 ? Sl 12:35 0:41
>>     >     >     /usr/bin/python2 /usr/bin/ansible-playbook
>>     >     >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
>>     >     >     --inventory=/tmp/ansible-inventory8237874608161160784
>>     >     >     --extra-vars=ovirt_query_ova_path=/ova
>>     >     >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
>>     >     >     ovirt 32099 57.5 78.9 15972880 11215312 ? R 12:35 11:52
>>     >     >     /usr/bin/python2 /usr/bin/ansible-playbook
>>     >     >     --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa
>>     >     >     --inventory=/tmp/ansible-inventory8237874608161160784
>>     >     >     --extra-vars=ovirt_query_ova_path=/ova
>>     >     >     /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
>>     >     >
>>     >     >     the playbook looks like:
>>     >     >
>>     >     >     - hosts: all
>>     >     >       remote_user: root
>>     >     >       gather_facts: no
>>     >     >
>>     >     >       roles:
>>     >     >         - ovirt-ova-query
>>     >     >
>>     >     >     and it looks like it only runs query_ova.py but on all hosts?
>>     >     >
>>     >     > No, the engine provides ansible the host to run on when it executes
>>     >     > the playbook. It would only be executed on the selected host.
>>     >     >
>>     >     >     How does this work? ...or should it work?
>>     >     >
>>     >     > It should, especially as the part of querying the OVA is supposed
>>     >     > to be really quick.
>>     >     > Can you please share the engine log and
>>     >     > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ?
>>     >
>>     >     engine log is here:
>>     >
>>     >     https://pastebin.com/nWWM3UUq
>>     >
>>     > Thanks.
>>     > Alright, so now the configuration is fetched but its processing fails.
>>     > We fixed many issues in this area recently, but it appears that
>>     > something is wrong with the actual size of the disk within the ovf file
>>     > that resides inside this ova file.
>>     > Can you please share that ovf file that resides inside /ova/HAAS-hpdio.ova?
>>
>>     file HAAS-hpdio.ova
>>     HAAS-hpdio.ova: POSIX tar archive (GNU)
>>
>>     [root@ovirt01 backup]# tar xvf HAAS-hpdio.ova
>>     HAAS-hpdio.ovf
>>     HAAS-hpdio-disk001.vmdk
>>
>>     file HAAS-hpdio.ovf is here:
>>
>>     https://pastebin.com/80qAU0wB
>>
>> Thanks again.
>> So that seems to be a VM that was exported from VirtualBox, right?
>> They don't do anything that violates the OVF specification but they do
>> some non-common things that we don't anticipate:
>
> yes, it is most likely an OVA from VirtualBox
>
>> First, they don't specify the actual size of the disk and the current
>> code in oVirt relies on that property.
>> There is a workaround for this though: you can extract an OVA file, edit
>> its OVF configuration - adding ovf:populatedSize="X" (and changing
>> ovf:capacity as I'll describe next) to the Disk element inside the
>> DiskSection - and pack the OVA again (tar cvf <ovf_file> <disk_file>),
>> where X is either:
>> 1. the actual size of the vmdk file + some buffer (iirc, we used to take
>> 15% of extra space for the conversion)
>> 2. if you're using file storage, or you don't mind consuming more
>> storage space on your block storage, simply set X to the virtual size of
>> the disk (in bytes) as indicated by the ovf:capacity field, e.g.,
>> ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova.
>>
>> Second, the virtual size (indicated by ovf:capacity) is specified in
>> bytes. The specification says that the default unit of allocation shall
>> be bytes, but practically every OVA file that I've ever seen specified it
>> in GB, and the current code in oVirt kind of assumes that this is the
>> case without checking the ovf:capacityAllocationUnits attribute that
>> could indicate the real unit of allocation [1].
>> Anyway, long story short, the virtual size of the disk should currently
>> be specified in GB, e.g., ovf:populatedSize="20" in the case of
>> HAAS-hpdio.ova.
>
> wow, thanks for this excellent explanation. I have changed this in the
> ovf file
>
> ...
> <Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="20" ...
> ...
>
> then I was able to import this modified OVA file (HAAS-hpdio_new.ova).
> The interesting thing is that the VM was shown in the VM list for a while
> (with state down with lock, and status initializing). After a while this
> VM disappeared :-o
>
> I am going to test it again and collect some logs...
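The extract-edit-repack workaround quoted above can be sketched roughly as follows. This is a minimal sketch: the file names follow the thread, the here-document stands in for a real extracted OVF, and the one-shot sed edit assumes the Disk element sits on a single line, which a real OVF may not satisfy.

```shell
# Work in a scratch directory; with a real OVA you would instead run:
#   tar xvf /ova/HAAS-hpdio.ova        # yields HAAS-hpdio.ovf + the vmdk
mkdir -p /tmp/ova-work && cd /tmp/ova-work

# Stand-in for the extracted OVF descriptor (hypothetical content):
cat > HAAS-hpdio.ovf <<'EOF'
<Disk ovf:capacity="21474836480" ovf:diskId="vmdisk2" ovf:format="vmdk"/>
EOF

# Per the advice above: ovf:capacity in GB, ovf:populatedSize in bytes.
sed -i 's/ovf:capacity="21474836480"/ovf:capacity="20" ovf:populatedSize="21474836480"/' \
    HAAS-hpdio.ovf

# Repack; the OVF descriptor should be the first entry of the archive
# (a real run would also append HAAS-hpdio-disk001.vmdk):
tar cf HAAS-hpdio_new.ova HAAS-hpdio.ovf
```

For a multi-line Disk element, an XML-aware tool (e.g. xmlstarlet) would be safer than sed.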
there are interesting logs in /var/log/vdsm/import/ at the host used for
the import:

http://mirror.slu.cz/tmp/ovirt-import.tar.bz2

the first of them describes the situation where I chose thick
provisioning, the second the situation with thin provisioning

the interesting part, I believe, is
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o preallocation=off,compat=0.10
libguestfs: command: run: \ /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec
libguestfs: command: run: \ 21474836480
Formatting '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec', fmt=qcow2 size=21474836480 compat=0.10 encryption=off cluster_size=65536 preallocation=off lazy_refcounts=off refcount_bits=16
libguestfs: trace: vdsm_disk_create: disk_create = 0
qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2' '/var/tmp/v2vovl2dccbd.qcow2' '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec'
qemu-img: error while writing sector 1000960: No space left on device
virt-v2v: error: qemu-img command failed, see earlier errors
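For option 1 of the workaround quoted earlier (actual vmdk size plus a safety buffer), the arithmetic is simple. A sketch, with the ~15% figure taken from the thread; the byte count below is the HAAS-hpdio.ova size from the earlier `ll /ova/` listing, used only as a stand-in for the real vmdk size:

```shell
# populatedSize = actual vmdk size + ~15% headroom for the conversion.
vmdk_bytes=1111785984    # in practice: stat -c %s HAAS-hpdio-disk001.vmdk
populated=$((vmdk_bytes + vmdk_bytes * 15 / 100))
echo "$populated"
```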
Sorry again, I made a mistake in: "Anyway, long story short, the virtual size of the disk should currently be specified in GB, e.g., ovf:populatedSize="20" in the case of HAAS-hpdio.ova." I should have written ovf:capacity="20". So if you wish the actual size of the disk to be 20 GB (which means the disk is preallocated), the Disk element should be set with:

<Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="21474836480" ...
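As a sanity check on the two units in that corrected element (a quick sketch; "GB" is taken here as GiB, i.e. 1024^3 bytes, which is what matches the numbers in the thread):

```shell
# ovf:capacity is given in GB, ovf:populatedSize in bytes;
# 20 GiB expressed in bytes should equal the populatedSize value.
capacity_gb=20
echo $((capacity_gb * 1024 * 1024 * 1024))
```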
now I have this in the ovf file:

<Disk ovf:capacity="20" ovf:diskId="vmdisk2" ovf:populatedSize="21474836480"...

but the import fails again, in this case faster. It looks like the SPM cannot create the disk image.

log from the SPM host...
2018-02-21 18:02:03,599+0100 INFO  (jsonrpc/1) [vdsm.api] START createVolume(sdUUID=u'69f6b3e7-d754-44cf-a665-9d7128260401', spUUID=u'00000002-0002-0002-0002-0000000002b9', imgUUID=u'0a5c4ecb-2c04-4f96-858a-4f74915d5caa', size=u'20', volFormat=4, preallocate=2, diskType=u'DATA', volUUID=u'bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0', desc=u'{"DiskAlias":"HAAS-hpdio-disk001.vmdk","DiskDescription":""}', srcImgUUID=u'00000000-0000-0000-0000-000000000000', srcVolUUID=u'00000000-0000-0000-0000-000000000000', initialSize=u'21474836480') from=::ffff:193.84.206.172,53154, flow_id=e27cd35a-dc4e-4e72-a3ef-aa5b67c2bdab, task_id=e7598aa1-420a-4612-9ee8-03012b1277d9 (api:46)
2018-02-21 18:02:03,603+0100 INFO  (jsonrpc/1) [IOProcessClient] Starting client ioprocess-3931 (__init__:330)
2018-02-21 18:02:03,638+0100 INFO  (ioprocess/56120) [IOProcess] Starting ioprocess (__init__:452)
2018-02-21 18:02:03,661+0100 INFO  (jsonrpc/1) [vdsm.api] FINISH createVolume return=None from=::ffff:193.84.206.172,53154, flow_id=e27cd35a-dc4e-4e72-a3ef-aa5b67c2bdab, task_id=e7598aa1-420a-4612-9ee8-03012b1277d9 (api:52)
2018-02-21 18:02:03,692+0100 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Volume.create succeeded in 0.09 seconds (__init__:573)
2018-02-21 18:02:03,694+0100 INFO  (tasks/1) [storage.ThreadPool.WorkerThread] START task e7598aa1-420a-4612-9ee8-03012b1277d9 (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x3faa050>>, args=None) (threadPool:208)
2018-02-21 18:02:03,995+0100 INFO  (tasks/1) [storage.StorageDomain] Create placeholder /rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665-9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa for image's volumes (sd:1244)
2018-02-21 18:02:04,016+0100 INFO  (tasks/1) [storage.Volume] Creating volume bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 (volume:1151)
2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] The requested initial 21474836480 is bigger than the max size 134217728 (blockVolume:345)
2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] Failed to create volume /rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665-9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa/bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0: Invalid parameter: 'initial size=41943040' (volume:1175)
2018-02-21 18:02:04,061+0100 ERROR (tasks/1) [storage.Volume] Unexpected error (volume:1215)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1172, in create
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 501, in _create
    size, initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 545, in calculate_volume_alloc_size
    preallocate, capacity, initial_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 347, in calculate_volume_alloc_size
    initial_size)
InvalidParameterException: Invalid parameter: 'initial size=41943040'
2018-02-21 18:02:04,062+0100 ERROR (tasks/1) [storage.TaskManager.Task] (Task='e7598aa1-420a-4612-9ee8-03012b1277d9') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1936, in createVolume
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 801, in createVolume
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1217, in create
    (volUUID, e))
VolumeCreationError: Error creating a new volume: (u"Volume creation bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 failed: Invalid parameter: 'initial size=41943040'",)
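The failure above is vdsm's sanity check on sparse block volumes: the requested initial allocation exceeds the maximum it allows. Below is a minimal Python sketch of that kind of check. It is illustrative only, not the real `blockVolume.calculate_volume_alloc_size` code; the byte-vs-block units are inferred from the two ERROR lines (21474836480 / 512 = 41943040, the number in the exception message).

```python
BLOCK_SIZE = 512  # vdsm expresses volume sizes in 512-byte blocks


class InvalidParameterException(Exception):
    pass


def check_initial_size(initial_size_bytes, max_size_bytes):
    """Illustrative re-creation of the failing check: reject a sparse
    volume whose initial allocation exceeds the allowed maximum."""
    initial_blocks = initial_size_bytes // BLOCK_SIZE
    if initial_size_bytes > max_size_bytes:
        # the exception reports the size in 512-byte blocks
        raise InvalidParameterException(
            "Invalid parameter: 'initial size=%d'" % initial_blocks)
    return initial_blocks


# The values from the log: a 20 GiB initial allocation vs. a 128 MiB max.
try:
    check_initial_size(21474836480, 134217728)
except InvalidParameterException as e:
    print(e)  # Invalid parameter: 'initial size=41943040'
```

Running it with the values from the log reproduces the exception text verbatim.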
there are no new logs in the import folder on the host used for the import...
>>> That should do it. If not, please share the OVA file and I will examine
>>> it in my environment.
>>
>> original file is at
>>
>> https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova
>>
>>> [1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220
>>>> file /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log
>>>> in fact does not exist (nor does the folder /var/log/ovirt-engine/ova/)
>>>
>>> This issue is also resolved in 4.2.2.
>>> In the meantime, please create the /var/log/ovirt-engine/ova/ folder
>>> manually and make sure its permissions match the ones of the other
>>> folders in /var/log/ovirt-engine.
>>
>> ok, done. After another try there is this log file
>>
>> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log
>>
>> https://pastebin.com/M5J44qur
>
> Is it the log of the execution of the ansible playbook that was provided
> with a path to the /ova folder?
> I'm interested in that in order to see how come its execution
> never completed.

well, I don't think so, it is the log from an import with the full path
to the ova file

> Cheers,
> Jiri Slezka
>
>> I am using latest 4.2.1.7-1.el7.centos version
>>
>> [1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS
>> [2] https://haas.cesnet.cz/downloads/release-01/ - Image repository
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On Thu, Feb 22, 2018 at 12:09:56PM +0200, Arik Hadas wrote:
> supermin: failed to find a suitable kernel (host_cpu=x86_64).
> Please run ‘libguestfs-test-tool’ and attach the complete output.
>
> @Richard, this is an OVA of a VM installed with Debian64 as guest OS that
> was exported from VirtualBox, is it supported by virt-v2v?
No, we only support OVAs exported from VMware.  OVF isn't a real
standard, it's a ploy by VMware to pretend that they conform to
standards.

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html

On 02/22/2018 11:22 AM, Richard W.M. Jones wrote:
> On Thu, Feb 22, 2018 at 12:09:56PM +0200, Arik Hadas wrote:
>> supermin: failed to find a suitable kernel (host_cpu=x86_64).
>
> Please run ‘libguestfs-test-tool’ and attach the complete output.
libguestfs-test-tool
************************************************************
*                    IMPORTANT NOTICE
*
* When reporting bugs, include the COMPLETE, UNEDITED
* output below in your bug report.
*
************************************************************
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: []
guestfs_get_cachedir: /var/tmp
guestfs_get_direct: 0
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 500
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: x86_64
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.36.3rhel=7,release=6.el7_4.3,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsmsimNR
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 3002000 (3.2.0)
libguestfs: guest random name = guestfs-ii13o2gd48kt6mrz
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libvirt needs authentication to connect to libvirt URI qemu:///system
(see also: http://libvirt.org/auth.html http://libvirt.org/uri.html)

(not sure if you need information after authentication (and I am not
sure which credentials it needs))
>> @Richard, this is an OVA of a VM installed with Debian64 as guest OS that
>> was exported from VirtualBox, is it supported by virt-v2v?
>
> No, we only support OVAs exported from VMware.  OVF isn't a real
> standard, it's a ploy by VMware to pretend that they conform to
> standards.
:-) maybe supporting import from VirtualBox is the way to lower VMware's
importance :-)

Cheers,

Jiri

On Thu, Feb 22, 2018 at 01:27:18PM +0100, Jiří Sléžka wrote:
> libvirt needs authentication to connect to libvirt URI qemu:///system
> (see also: http://libvirt.org/auth.html http://libvirt.org/uri.html)
You can set the backend to direct to avoid needing libvirt:

  export LIBGUESTFS_BACKEND=direct

Alternately you can fiddle with the libvirt polkit configuration to
permit access:

  https://libvirt.org/aclpolkit.html

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
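The same override works from a script by passing a modified environment to the child process. A minimal sketch (the actual invocation is left commented out, since it only works on a host with libguestfs installed; `run_test_tool` is a name invented here):

```python
import os
import subprocess


def run_test_tool(backend="direct"):
    """Prepare the environment for libguestfs-test-tool with
    LIBGUESTFS_BACKEND overridden, so the appliance is launched
    directly instead of through libvirt."""
    env = dict(os.environ)
    env["LIBGUESTFS_BACKEND"] = backend
    # On a host with libguestfs installed:
    # subprocess.run(["libguestfs-test-tool"], env=env, check=True)
    return env


env = run_test_tool()
print(env["LIBGUESTFS_BACKEND"])  # direct
```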

On 02/22/2018 01:58 PM, Richard W.M. Jones wrote:
> On Thu, Feb 22, 2018 at 01:27:18PM +0100, Jiří Sléžka wrote:
>> libvirt needs authentication to connect to libvirt URI qemu:///system
>> (see also: http://libvirt.org/auth.html http://libvirt.org/uri.html)
>
> You can set the backend to direct to avoid needing libvirt:
>
>   export LIBGUESTFS_BACKEND=direct
>
> Alternately you can fiddle with the libvirt polkit configuration to
> permit access:
thanks, here is the full output

http://mirror.slu.cz/tmp/libguestfs-test-tool.txt

Jiri
> https://libvirt.org/aclpolkit.html
>
> Rich.

On Thu, Feb 22, 2018 at 3:26 PM, Jiří Sléžka <jiri.slezka@slu.cz> wrote:
> On 02/22/2018 01:58 PM, Richard W.M. Jones wrote:
>> On Thu, Feb 22, 2018 at 01:27:18PM +0100, Jiří Sléžka wrote:
>>> libvirt needs authentication to connect to libvirt URI qemu:///system
>>> (see also: http://libvirt.org/auth.html http://libvirt.org/uri.html)
>>
>> You can set the backend to direct to avoid needing libvirt:
>>
>>   export LIBGUESTFS_BACKEND=direct
>>
>> Alternately you can fiddle with the libvirt polkit configuration to
>> permit access:
>
> thanks, here is the full output
>
> http://mirror.slu.cz/tmp/libguestfs-test-tool.txt
>
> Jiri
Thanks, there is apparently something wrong with that particular host of
mine - not worth spending the time on investigating it.

Jiri, your test seems to pass. Could you try invoking the import again
with the latest proposed changes to the OVF configuration (adding
ovf:size to the File element and removing the USB item) and update us?
> https://libvirt.org/aclpolkit.html
>
> Rich.
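The two OVF edits proposed above (an ovf:size attribute on the File element, and dropping the USB item) can be scripted rather than done by hand. A hedged sketch using Python's stdlib ElementTree: the namespace URIs and ResourceType value 23 (USB controller) follow the standard OVF/CIM schema, but the structure of this particular OVA's .ovf is an assumption, and `patch_ovf` is a name invented here.

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"
RASD_NS = ("http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
           "CIM_ResourceAllocationSettingData")
ET.register_namespace("ovf", OVF_NS)


def patch_ovf(ovf_xml, disk_size_bytes):
    """Add ovf:size to each File reference and drop any USB controller
    Item (CIM ResourceType 23), as proposed for the failing import."""
    root = ET.fromstring(ovf_xml)
    for f in root.iter("{%s}File" % OVF_NS):
        f.set("{%s}size" % OVF_NS, str(disk_size_bytes))
    for hw in root.iter("{%s}VirtualHardwareSection" % OVF_NS):
        for item in list(hw.findall("{%s}Item" % OVF_NS)):
            rtype = item.find("{%s}ResourceType" % RASD_NS)
            if rtype is not None and rtype.text == "23":
                hw.remove(item)
    return ET.tostring(root, encoding="unicode")


# Toy envelope standing in for the real HAAS-hpdio.ovf (structure assumed):
sample = (
    '<Envelope xmlns="%(ovf)s" xmlns:ovf="%(ovf)s" xmlns:rasd="%(rasd)s">'
    '<References>'
    '<File ovf:href="HAAS-hpdio-disk001.vmdk" ovf:id="file1"/>'
    '</References>'
    '<VirtualSystem><VirtualHardwareSection>'
    '<Item><rasd:ResourceType>23</rasd:ResourceType></Item>'
    '<Item><rasd:ResourceType>10</rasd:ResourceType></Item>'
    '</VirtualHardwareSection></VirtualSystem>'
    '</Envelope>' % {"ovf": OVF_NS, "rasd": RASD_NS}
)
patched = patch_ovf(sample, 21474836480)
```

After patching, the File element carries ovf:size and only the non-USB hardware Item remains.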
participants (3):
- Arik Hadas
- Jiří Sléžka
- Richard W.M. Jones