Ok, solved.
It was simply that node2 could not mount the data domain of node1 via
NFS. I added node1 to node2's firewall and to /etc/exports, re-tested,
and everything went fine.
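For anyone hitting the same issue, the fix on node2 was roughly the
following (the hostname and export path are examples, adjust them to
your setup):

  # Open the NFS-related services in firewalld
  firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
  firewall-cmd --reload

  # /etc/exports: allow node1 to mount the data directory
  /data node1.example.com(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)

  # Re-export without restarting the NFS server
  exportfs -ra

A quick "showmount -e node2.example.com" from the other node confirms
the export is reachable through the firewall.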
Regards,
Francesco
On 21/09/2020 17:44, francesco--- via Users wrote:
Hi Everyone,
In a test environment I'm trying to deploy a single-node self-hosted engine 4.4 on
CentOS 8 from a 4.3 backup. The current setup is:
- node1 with CentOS 7, oVirt 4.3 with a working self-hosted engine. The data domain is a
local NFS share;
- node2 with CentOS 8, where we are trying to deploy the engine starting from the node1
engine backup;
- host1 with CentOS 7, running a couple of VMs (4.3).
I'm following the guide:
https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_...
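Concretely, on the 4.3 side I ran more or less these steps (a sketch of
the guide's procedure; the backup file name is just an example):

  # On the old engine VM: take the backup that the new deploy restores from
  engine-backup --mode=backup --file=engine-43.backup --log=engine-backup.log

  # On node1: enable global maintenance for the hosted engine
  hosted-engine --set-maintenance --mode=global

  # On the engine VM: stop the engine service
  systemctl stop ovirt-engine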
Everything seems to work fine: the engine on node1 is in global maintenance mode and
the ovirt-engine service is stopped. The deployment on node2 gets stuck with the following
error:
TASK [ovirt.hosted_engine_setup : Wait for OVF_STORE disk content]
[ ERROR ] {'msg': 'non-zero return code', 'cmd':
"vdsm-client Image prepare storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520
imageID=e48a66dd-74c9-43eb-890e-778e9c4ee8db volumeID=06bb5f34-112d-4214-91d2-53d0bdb84321
| grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf -
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", 'stdout': '',
'stderr': "vdsm-client: Command Image.prepare with args
{'storagepoolID': '06c58622-f99b-11ea-9122-00163e1bbc93',
'storagedomainID': '2a4a3cce-f2f6-4ddd-b337-df5ef562f520',
'imageID': 'e48a66dd-74c9-43eb-890e-778e9c4ee8db',
'volumeID': '06bb5f34-112d-4214-91d2-53d0bdb84321'} failed:\n(code=309,
message=Unknown pool id, pool not connected:
('06c58622-f99b-11ea-9122-00163e1bbc93',))\ntar: This does not look like a tar
archive\ntar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in archive\ntar: Exiting
with failure status due to previous errors", 'rc': 2, 'start':
'2020-09-21 17:14:17.293090', 'end': '2020-09-21 17:14:17.644253',
'delta': '0:00:00.351163', 'changed': True, 'failed':
True, 'invocation': {'module_args': {'warn': False,
'_raw_params': "vdsm-client Image prepare
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520
imageID=e48a66dd-74c9-43eb-890e-778e9c4ee8db volumeID=06bb5f34-112d-4214-91d2-53d0bdb84321
| grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf -
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", '_uses_shell': True,
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None,
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}},
'stdout_lines': [], 'stderr_lines': ["vdsm-client: Command
Image.prepare with args {'storagepoolID':
'06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID':
'2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID':
'e48a66dd-74c9-43eb-890e-778e9c4ee8db', 'volumeID':
'06bb5f34-112d-4214-91d2-53d0bdb84321'} failed:", "(code=309,
message=Unknown pool id, pool not connected:
('06c58622-f99b-11ea-9122-00163e1bbc93',))", 'tar: This does not look
like a tar archive', 'tar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in
archive', 'tar: Exiting with failure status due to previous errors'],
'_ansible_no_log': False, 'attempts':
12, 'item': {'name': 'OVF_STORE', 'image_id':
'06bb5f34-112d-4214-91d2-53d0bdb84321', 'id':
'e48a66dd-74c9-43eb-890e-778e9c4ee8db'}, 'ansible_loop_var':
'item', '_ansible_item_label': {'name': 'OVF_STORE',
'image_id': '06bb5f34-112d-4214-91d2-53d0bdb84321', 'id':
'e48a66dd-74c9-43eb-890e-778e9c4ee8db'}}
[ ERROR ] {'msg': 'non-zero return code', 'cmd':
"vdsm-client Image prepare storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520
imageID=750428bd-1273-467f-9b27-7f6fe58a446c volumeID=1c89c678-f883-4e61-945c-5f7321add343
| grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf -
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", 'stdout': '',
'stderr': "vdsm-client: Command Image.prepare with args
{'storagepoolID': '06c58622-f99b-11ea-9122-00163e1bbc93',
'storagedomainID': '2a4a3cce-f2f6-4ddd-b337-df5ef562f520',
'imageID': '750428bd-1273-467f-9b27-7f6fe58a446c',
'volumeID': '1c89c678-f883-4e61-945c-5f7321add343'} failed:\n(code=309,
message=Unknown pool id, pool not connected:
('06c58622-f99b-11ea-9122-00163e1bbc93',))\ntar: This does not look like a tar
archive\ntar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in archive\ntar: Exiting
with failure status due to previous errors", 'rc': 2, 'start':
'2020-09-21 17:16:26.030343', 'end': '2020-09-21 17:16:26.381862',
'delta': '0:00:00.351519', 'changed': True, 'failed':
True, 'invocation': {'module_args': {'warn': False,
'_raw_params': "vdsm-client Image prepare
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520
imageID=750428bd-1273-467f-9b27-7f6fe58a446c volumeID=1c89c678-f883-4e61-945c-5f7321add343
| grep path | awk '{ print $2 }' | xargs -I{} sudo -u vdsm dd if={} | tar -tvf -
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", '_uses_shell': True,
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None,
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}},
'stdout_lines': [], 'stderr_lines': ["vdsm-client: Command
Image.prepare with args {'storagepoolID':
'06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID':
'2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID':
'750428bd-1273-467f-9b27-7f6fe58a446c', 'volumeID':
'1c89c678-f883-4e61-945c-5f7321add343'} failed:", "(code=309,
message=Unknown pool id, pool not connected:
('06c58622-f99b-11ea-9122-00163e1bbc93',))", 'tar: This does not look
like a tar archive', 'tar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in
archive', 'tar: Exiting with failure status due to previous errors'],
'_ansible_no_log': False, 'attempts':
12, 'item': {'name': 'OVF_STORE', 'image_id':
'1c89c678-f883-4e61-945c-5f7321add343', 'id':
'750428bd-1273-467f-9b27-7f6fe58a446c'}, 'ansible_loop_var':
'item', '_ansible_item_label': {'name': 'OVF_STORE',
'image_id': '1c89c678-f883-4e61-945c-5f7321add343', 'id':
'750428bd-1273-467f-9b27-7f6fe58a446c'}}
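If I read the failing task correctly, it retries 12 times ('attempts':
12), each time preparing the OVF_STORE volume and piping it through dd
and tar to read the engine OVF; since the pool is not connected, dd
produces nothing and tar fails. For anyone wanting to check the storage
side manually, something like this should show whether the pool and the
mount are reachable (hostnames are placeholders):

  # Is the export on the other node visible at all?
  showmount -e node1.example.com

  # Can it be mounted by hand?
  mount -t nfs node1.example.com:/data /mnt/nfs-test

  # Does VDSM consider any storage pool connected?
  vdsm-client Host getConnectedStoragePools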
For the "domain data" steps, I created the same folder "/data" on the
node2 server that should be used as NFS domain data (and, the playbook added correctly the
storage):
Please specify the storage you would like to use (glusterfs, iscsi, fc,
nfs)[nfs]:
Please specify the nfs version you would like to use (auto, v3, v4, v4_0,
v4_1, v4_2)[auto]:
Please specify the full shared storage connection path to use (example:
host:/path): node2-server.tld:/data
If needed, specify additional mount options for the connection to the
hosted-engine storagedomain (example: rsize=32768,wsize=32768) []:
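For completeness, the /data export on node2 was prepared roughly like
this (a sketch assuming the standard oVirt NFS prerequisites; 36:36 is
the vdsm:kvm UID/GID that oVirt expects on NFS storage domains):

  mkdir -p /data
  chown 36:36 /data
  chmod 0755 /data

  # /etc/exports
  /data *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)

  exportfs -ra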
[ INFO ] Creating Storage Domain
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the storage interface to be up]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check local VM dir stat]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Enforce local VM dir existence]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch host facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch cluster ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch cluster facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter ID]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add iSCSI storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Add Fibre Channel storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Get storage domain details]
[ INFO ] ok: [localhost]
--
Shellrent - The first Italian hosting provider, Security First
Francesco Lorenzini
System Administrator & DevOps Engineer
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 | Fax 04441492177