2019-01-11 10:53:32,213+0000 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the host to be up]
2019-01-11 10:55:56,251+0000 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost]
2019-01-11 10:55:56,857+0000 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug]
2019-01-11 10:55:57,459+0000 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 host_result_up_check: {u'deprecations': [{u'msg': u"The 'ovirt_hosts_facts' module is being renamed 'ovirt_host_facts'", u'version': 2.8}], 'attempts': 27, u'changed': False, u'ansible_facts': {u'ovirt_hosts': [{u'comment': u'', u'update_available': False, u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [], u'cluster': {u'href': u'/ovirt-engine/api/clusters/c30d9494-4f7c-11e8-b69f-00163e6a7aff', u'id': u'c30d9494-4f7c-11e8-b69f-00163e6a7aff'}, u'href': u'/ovirt-engine/api/hosts/384e456c-6dfb-4dd9-bb39-a5b11cccdba0', u'devices': [], u'id': u'384e456c-6dfb-4dd9-bb39-a5b11cccdba0', u'external_status': u'ok', u'statistics': [], u'certificate': {u'organization': u'cluster', u'subject': u'O=cluster,CN=virtA006.cluster'}, u'nics': [], u'storage_connection_extensions': [], u'port': 54321, u'hardware_information': {u'supported_rng_sources': []}, u'memory': 0, u'ksm': {u'enabled': False}, u'se_linux': {}, u'type': u'ovirt_node', u'status': u'non_operational', u'tags': [], u'katello_errata': [], u'external_network_provider_configurations': [], u'status_detail': u'none', u'ssh': {u'port': 22, u'fingerprint': u'SHA256:ioEk58fN4Em/9HgpEO1ImXh+/qh2xW1Oj+9nDFVN+jg'}, u'address': u'virtA006.cluster', u'numa_nodes': [], u'device_passthrough': {u'enabled': False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported': False, u'power_management': {u'kdump_detection': True, u'enabled': False, u'pm_proxies': [], u'automatic_pm_enabled': True}, u'name': u'virtA006.cluster', u'max_scheduling_memory': 0, u'summary': {u'total': 0}, u'auto_numa_status': u'unknown', u'transparent_huge_pages': {u'enabled': False}, u'network_attachments': [], u'os': {u'custom_kernel_cmdline': u''}, u'cpu': {u'speed': 0.0, u'topology': {}}, u'kdump_status': u'unknown', u'spm': {u'priority': 5, u'status': u'none'}}]}, 'failed': False}
2019-01-11 10:55:58,063+0000 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Check host status]
2019-01-11 10:55:58,665+0000 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'msg': u'The host has been set in non_operational status, please check engine logs, fix accordingly and re-deploy.\n', u'changed': False, u'_ansible_no_log': False}
2019-01-11 10:55:58,766+0000 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"changed": false, "msg": "The host has been set in non_operational status, please check engine logs, fix accordingly and re-deploy.\n"}
You need to check engine.log (on the engine VM, at /var/log/ovirt-engine/engine.log) to understand why the host has been set to the non_operational state.
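For example, something like this should surface the relevant errors (the host name matches the log above; adjust paths and search patterns for your environment):

    # on the engine VM
    grep -i 'NonOperational' /var/log/ovirt-engine/engine.log
    grep 'virtA006' /var/log/ovirt-engine/engine.log | grep -iE 'error|fail|storage'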
It could be related to storage domains recorded in the backup file that are not available at restore time: in that case I'd suggest restoring into a new datacenter and manually fixing the storage configuration from the restored engine.
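A minimal sketch of that flow, assuming a backup file at /root/engine-backup.tar.gz (the file name is illustrative):

    # on a clean host: deploy a new hosted engine, restoring from the backup;
    # the interactive setup lets you enter a new datacenter/cluster name
    hosted-engine --deploy --restore-from-file=/root/engine-backup.tar.gz

Once the restored engine is up, you can detach or fix the stale storage domains from the Administration Portal.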
Once everything is healthy you can take another backup and, if needed, restore it again into the initial datacenter.
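The backup itself is taken with engine-backup on the engine VM, e.g. (file and log names here are illustrative):

    # on the engine VM
    engine-backup --mode=backup --file=/root/engine-backup.tar.gz --log=/root/engine-backup.log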