On Wed, May 8, 2019 at 3:44 PM Andreas Elvers <andreas.elvers+ovirtforum(a)solutions.work> wrote:
Hello,
when trying to deploy the engine using "hosted-engine --deploy
--restore-from-file=myenginebackup", the Ansible playbook errors out at
[ INFO ] TASK [ovirt.hosted_engine_setup : Trigger hosted engine OVF update and enable the serial console]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait until OVF update finishes]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Parse OVF_STORE disk list]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check OVF_STORE volume status]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for OVF_STORE disk content]
[ ERROR ] {u'_ansible_parsed': True,
  u'stderr_lines': [u'20+0 records in', u'20+0 records out',
    u'10240 bytes (10 kB) copied, 0.000141645 s, 72.3 MB/s',
    u'tar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf: Not found in archive',
    u'tar: Exiting with failure status due to previous errors'],
  u'changed': True, u'end': u'2019-05-08 15:21:47.595195',
  u'_ansible_item_label': {u'image_id': u'65fd6c57-033c-4c95-87c1-b16c26e4bc98',
    u'name': u'OVF_STORE', u'id': u'9ff8b389-5e24-4166-9842-f1d6104b662b'},
  u'stdout': u'', u'failed': True, u'_ansible_item_result': True,
  u'msg': u'non-zero return code', u'rc': 2,
  u'start': u'2019-05-08 15:21:46.906877', u'attempts': 12,
  u'cmd': u"vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=9ff8b389-5e24-4166-9842-f1d6104b662b volumeID=65fd6c57-033c-4c95-87c1-b16c26e4bc98 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf",
  u'item': {u'image_id': u'65fd6c57-033c-4c95-87c1-b16c26e4bc98',
    u'name': u'OVF_STORE', u'id': u'9ff8b389-5e24-4166-9842-f1d6104b662b'},
  u'delta': u'0:00:00.688318',
  u'invocation': {u'module_args': {u'warn': False, u'executable': None, u'_uses_shell': True,
    u'_raw_params': u"vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=9ff8b389-5e24-4166-9842-f1d6104b662b volumeID=65fd6c57-033c-4c95-87c1-b16c26e4bc98 | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf",
    u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}},
  u'stdout_lines': [],
  u'stderr': u'20+0 records in\n20+0 records out\n10240 bytes (10 kB) copied, 0.000141645 s, 72.3 MB/s\ntar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf: Not found in archive\ntar: Exiting with failure status due to previous errors',
  u'_ansible_no_log': False}
[ ERROR ] {u'_ansible_parsed': True,
  u'stderr_lines': [u'20+0 records in', u'20+0 records out',
    u'10240 bytes (10 kB) copied, 0.000140541 s, 72.9 MB/s',
    u'tar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf: Not found in archive',
    u'tar: Exiting with failure status due to previous errors'],
  u'changed': True, u'end': u'2019-05-08 15:24:01.387469',
  u'_ansible_item_label': {u'image_id': u'dacf9ad8-77b9-4205-8ca2-d6877627ad4a',
    u'name': u'OVF_STORE', u'id': u'8691076a-8e45-4429-a18a-5faebef866cc'},
  u'stdout': u'', u'failed': True, u'_ansible_item_result': True,
  u'msg': u'non-zero return code', u'rc': 2,
  u'start': u'2019-05-08 15:24:00.660309', u'attempts': 12,
  u'cmd': u"vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=8691076a-8e45-4429-a18a-5faebef866cc volumeID=dacf9ad8-77b9-4205-8ca2-d6877627ad4a | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf",
  u'item': {u'image_id': u'dacf9ad8-77b9-4205-8ca2-d6877627ad4a',
    u'name': u'OVF_STORE', u'id': u'8691076a-8e45-4429-a18a-5faebef866cc'},
  u'delta': u'0:00:00.727160',
  u'invocation': {u'module_args': {u'warn': False, u'executable': None, u'_uses_shell': True,
    u'_raw_params': u"vdsm-client Image prepare storagepoolID=597f329c-0296-03af-0369-000000000139 storagedomainID=f708ced4-e339-4d02-a07f-78f1a30fc2a8 imageID=8691076a-8e45-4429-a18a-5faebef866cc volumeID=dacf9ad8-77b9-4205-8ca2-d6877627ad4a | grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf - ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf",
    u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}},
  u'stdout_lines': [],
  u'stderr': u'20+0 records in\n20+0 records out\n10240 bytes (10 kB) copied, 0.000140541 s, 72.9 MB/s\ntar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf: Not found in archive\ntar: Exiting with failure status due to previous errors',
  u'_ansible_no_log': False}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
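For what it's worth, "Not found in archive" means tar read the OVF_STORE stream fine but no member matched the requested name; the volume was readable, it just does not contain an ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf entry (yet). One way to see what the tar really holds is to rerun the failing pipeline with the trailing member name dropped from "tar -tvf -". The local demo below is not the live vdsm-client pipeline; the info.json member is a made-up stand-in, used only to reproduce the same tar behaviour:

```shell
# Local demo of the failure mode (hypothetical archive contents; this is
# NOT the live vdsm-client pipeline). tar exits 2 with "Not found in
# archive" when the stream is readable but lacks the requested member.
tmp=$(mktemp -d)
printf 'placeholder' > "$tmp/info.json"      # stand-in for whatever OVF_STORE holds
tar -cf "$tmp/ovf_store.tar" -C "$tmp" info.json

# Asking for a specific missing member fails, exactly like the playbook task:
tar -tf "$tmp/ovf_store.tar" ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf \
  || echo "member missing, tar exit status: $?"

# Listing with no member name shows what is actually inside:
tar -tf "$tmp/ovf_store.tar"
```

Running the real "vdsm-client Image prepare ... | tar -tvf -" pipeline without the final member name should likewise list every member actually present in the OVF_STORE tar.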
I tried twice. Same result. Should I retry?
We had a bug about that in the past:
https://bugzilla.redhat.com/show_bug.cgi?id=1644748
but it's reported as CLOSED CURRENTRELEASE.
Can I ask which versions of ovirt-hosted-engine-setup,
ovirt-ansible-hosted-engine-setup, and ovirt-engine-appliance you are using?
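(For anyone following along: one way to answer this on the host, assuming an RPM-based install, is a quick query loop; the fallback message is my own wording.)

```shell
# Hedged sketch: report the installed versions of the three packages
# asked about above. Falls back to a message if a package (or rpm
# itself) is absent, so the loop always succeeds.
for pkg in ovirt-hosted-engine-setup \
           ovirt-ansible-hosted-engine-setup \
           ovirt-engine-appliance; do
  if v=$(rpm -q "$pkg" 2>/dev/null); then
    echo "$v"                                # prints the installed NVR
  else
    echo "$pkg: not queryable on this host"
  fi
done
```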
I see that you have sent more than one email over the last few days, and in
general all the issues you are reporting are due to timeouts/race conditions.
Can you please provide some more info about your storage configuration?
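A sketch of how that information could be pulled off the host; the config path is the usual hosted-engine location on a deployed host, but the grep keys are my guess at the relevant entries and may differ between versions:

```shell
# Hedged sketch: summarize the hosted-engine storage configuration.
# /etc/ovirt-hosted-engine/hosted-engine.conf is the usual location on a
# deployed host; the key names below are assumptions, not guaranteed.
conf=/etc/ovirt-hosted-engine/hosted-engine.conf
if [ -r "$conf" ]; then
  grep -E '^(storage|domainType|sdUUID|spUUID|mnt_options)=' "$conf" || true
else
  echo "no hosted-engine config at $conf (not a deployed HE host?)"
fi

# Which storage transport is actually mounted:
mount | grep -Ei 'nfs|glusterfs|iscsi' || echo "no NFS/Gluster/iSCSI mounts found"
```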
Is it safe to use the local hosted engine for starting/stopping VMs? I'm
kind of headless for some days :-)
Best regards.
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UNBCYRKXM24...
--
Simone Tiraboschi
He / Him / His
Principal Software Engineer
Red Hat <https://www.redhat.com/>
stirabos(a)redhat.com