Self Hosted Engine failed during setup using oVirt Node 4.3

Hardware Specs

Hypervisor: Supermicro A2SDi-16C-HLN4F, Intel(R) Atom(TM) CPU C3955 @ 2.10GHz, 16 cores, Samsung NVMe 1TB, SanDisk SSHD 1TB, 256GB RAM, 4 x 1G NICs
Storage: FreeNAS-11.2-RELEASE-U1 - NFS - iSCSI

* Issue #1: using iSCSI, I am unable to discover the target from the Cockpit dashboard (a manual discovery check is sketched below).
* Issue #2: using NFS, the setup can connect successfully, but the host won't come up after it is shut down by the installation.
* Error message:

[ INFO ] TASK [oVirt.hosted-engine-setup : Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.358141", "end": "2019-03-14 06:33:35.499429", "rc": 0, "start": "2019-03-14 06:33:35.141288", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=43479 (Thu Mar 14 06:33:34 2019)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=43480 (Thu Mar 14 06:33:34 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Thu Jan 1 07:04:46 1970\\n\", \"hostname\": \"dreamlevel-1.logistics.corp\", \"host-id\": 1, \"engine-status\": {\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\": false, \"maintenance\": false, \"crc32\": \"de300a81\", \"local_conf_timestamp\": 43480, \"host-ts\": 43479}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=43479 (Thu Mar 14 06:33:34 2019)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=43480 (Thu Mar 14 06:33:34 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Thu Jan 1 07:04:46 1970\\n\", \"hostname\": \"dreamlevel-1.logistics.corp\", \"host-id\": 1, \"engine-status\": {\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\": false, \"maintenance\": false, \"crc32\": \"de300a81\", \"local_conf_timestamp\": 43480, \"host-ts\": 43479}, \"global_maintenance\": false}"]}
[ INFO ] TASK [oVirt.hosted-engine-setup : Check VM status at virt level]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Fail if engine VM is not running]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM is not running, please check vdsm logs"}

2019-03-14 06:33:41,168-0400 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Check VM status at virt level', 'ansible_result': u"type: <type 'dict'>\nstr: {'_ansible_parsed': True, 'stderr_lines': [], u'changed': True, u'end': u'2019-03-14 06:33:39.429283', '_ansible_no_log': False, u'stdout': u'', u'cmd': u'virsh -r list | grep HostedEngine | grep running', u'rc': 1, u'stderr': u'', u'delta': u'0:00:00.118398', u'invocation': {u'module_args': {u'crea", 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
2019-03-14 06:33:41,168-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8948c2fd0> kwargs ignore_errors:True
2019-03-14 06:30:45,000-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs
2019-03-14 06:30:50,678-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894782490> kwargs
2019-03-14 06:30:56,369-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894675490> kwargs
2019-03-14 06:31:02,059-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8950ff750> kwargs
2019-03-14 06:31:07,738-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs
2019-03-14 06:31:13,429-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947824d0> kwargs
2019-03-14 06:31:19,118-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947c2390> kwargs
2019-03-14 06:31:24,797-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894750b10> kwargs
2019-03-14 06:31:30,481-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs
2019-03-14 06:31:36,174-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894782490> kwargs
2019-03-14 06:31:41,852-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894675490> kwargs
2019-03-14 06:31:47,534-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947c2390> kwargs
2019-03-14 06:31:53,226-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs
2019-03-14 06:31:58,908-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947824d0> kwargs
2019-03-14 06:32:04,592-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8945f9f10> kwargs
2019-03-14 06:32:10,266-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944fb390> kwargs
2019-03-14 06:32:15,948-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs
2019-03-14 06:32:21,630-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894782490> kwargs
2019-03-14 06:32:27,316-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894675490> kwargs
2019-03-14 06:32:33,016-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8945f9f10> kwargs
2019-03-14 06:32:38,702-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs
2019-03-14 06:32:44,382-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947824d0> kwargs
2019-03-14 06:32:50,063-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8950ff750> kwargs
2019-03-14 06:32:55,745-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894750b10> kwargs
2019-03-14 06:33:01,422-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs
2019-03-14 06:33:07,107-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894782490> kwargs
2019-03-14 06:33:12,794-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894675490> kwargs
2019-03-14 06:33:18,470-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8950ff750> kwargs

Engine VM status: EngineUnexpectedlyDown-EngineDown.

* Manually starting the Engine VM via virsh fails with:

error: Failed to start domain HostedEngine
error: the CPU is incompatible with host CPU: Host CPU does not provide required features: pcid
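On issue #1, it can help to rule the Cockpit wizard out by checking whether the FreeNAS portal answers a plain discovery request from the node at all. This is only a minimal sketch, assuming the standard iscsi-initiator-utils tooling is present on the node and substituting the real FreeNAS portal address for the placeholder IP:

  # ask the portal which targets it exposes (replace 192.168.1.10 with the FreeNAS portal IP)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
  # the initiator name the node presents; the FreeNAS target must allow it
  cat /etc/iscsi/initiatorname.iscsi

If discovery fails here too, the problem sits between the node and FreeNAS (network, portal, or initiator ACL) rather than in Cockpit.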

On Thu, Mar 14, 2019 at 12:06 PM Jagi Sarcilla < jagi.sarcilla@cevalogistics.com> wrote:
Hardware Specs - Hypervisor: Supermicro A2SDi-16C-HLN4F, Intel(R) Atom(TM) CPU C3955 @ 2.10GHz, 16 cores
I fear it could come from this: Ryan, do we support Intel Atom C3000 family?
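One way to see how libvirt itself classifies this CPU is a read-only capabilities query on the host (read-only, so it does not need vdsm's SASL credentials). This is only a data-gathering sketch; the exact output depends on the libvirt build shipped with oVirt Node 4.3:

  # print the host <cpu> block as libvirt detects it
  virsh -r capabilities | sed -n '/<cpu>/,/<\/cpu>/p'

The model and feature list reported there is a good hint of what the engine will derive the cluster CPU type from.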

On Thu, Mar 14, 2019 at 12:51 PM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Mar 14, 2019 at 12:06 PM Jagi Sarcilla < jagi.sarcilla@cevalogistics.com> wrote:
[ INFO ] TASK [oVirt.hosted-engine-setup : Check engine VM health]
But in order to reach that point, the host came up for the engine, and then the engine was not able to start a VM on it. So maybe we have an issue here: the host should have been refused by the engine with a clear error message. Jagi, can you please share the logs you have in /var/log/ovirt-hosted-engine-setup?
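If it is easier, everything relevant can be bundled into a single archive; the paths below are the defaults on oVirt Node 4.3 (adjust them if your deployment logs elsewhere):

  # collect the hosted-engine setup logs plus the vdsm log referenced by the error message
  tar czf /tmp/he-setup-logs.tar.gz /var/log/ovirt-hosted-engine-setup/ /var/log/vdsm/vdsm.log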
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.358141", "end": "2019-03-14 06:33:35.499429", "rc": 0, "start": "2019-03-14 06:33:35.141288", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=43479 (Thu Mar 14 06:33:34 2019)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=43480 (Thu Mar 14 06:33:34 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Thu Jan 1 07:04:46 1970\\n\", \"hostname\": \"dreamlevel-1.logistics.corp\", \"host-id\": 1, \"engine-status\": {\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\": false, \"maintenance\": false, \"crc32\": \"de300a81\", \"local_conf_timestamp\": 43480, \"host-ts\": 43479}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=43479 (Thu Mar 14 06:33:34 2019)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=43480 (Thu Mar 14 06:33:34 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Thu Jan 1 07:04:46 1970\\n\", \"hostname\": \"dreamlevel-1.logistics.corp\", \"host-id\": 1, \"engine-status\": {\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\": false, \"maintenance\": false, \"crc32\": \"de300a81\", \"local_conf_timestamp\": 43480, \"host-ts\": 43479}, \"global_maintenance\": false}"]} [ INFO ] TASK [oVirt.hosted-engine-setup : Check VM status at virt level] [ INFO ] TASK [oVirt.hosted-engine-setup : debug] [ INFO ] ok: [localhost] [ INFO ] TASK [oVirt.hosted-engine-setup : Fail if engine VM is not running] [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM is not running, please check vdsm logs"}
2019-03-14 06:33:41,168-0400 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Check VM status at virt level', 'ansible_result': u"type: <type 'dict'>\nstr: {'_ansible_parsed': True, 'stderr_lines': [], u'changed': True, u'end': u'2019-03-14 06:33:39.429283', '_ansible_no_log': False, u'stdout': u'', u'cmd': u'virsh -r list | grep HostedEngine | grep running', u'rc': 1, u'stderr': u'', u'delta': u'0:00:00.118398', u'invocation': {u'module_args': {u'crea", 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'} 2019-03-14 06:33:41,168-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8948c2fd0> kwargs ignore_errors:True
2019-03-14 06:30:45,000-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs 2019-03-14 06:30:50,678-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894782490> kwargs 2019-03-14 06:30:56,369-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894675490> kwargs 2019-03-14 06:31:02,059-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8950ff750> kwargs 2019-03-14 06:31:07,738-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs 2019-03-14 06:31:13,429-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947824d0> kwargs 2019-03-14 06:31:19,118-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947c2390> kwargs 2019-03-14 06:31:24,797-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894750b10> kwargs 2019-03-14 06:31:30,481-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs 2019-03-14 06:31:36,174-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894782490> kwargs 2019-03-14 06:31:41,852-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894675490> kwargs 2019-03-14 06:31:47,534-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947c2390> kwargs 2019-03-14 06:31:53,226-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs 2019-03-14 06:31:58,908-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947824d0> kwargs 2019-03-14 06:32:04,592-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8945f9f10> kwargs 2019-03-14 06:32:10,266-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944fb390> kwargs 2019-03-14 06:32:15,948-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs 2019-03-14 06:32:21,630-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894782490> kwargs 2019-03-14 06:32:27,316-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894675490> kwargs 2019-03-14 06:32:33,016-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8945f9f10> kwargs 2019-03-14 06:32:38,702-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs 2019-03-14 06:32:44,382-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8947824d0> kwargs 2019-03-14 06:32:50,063-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8950ff750> kwargs 2019-03-14 06:32:55,745-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894750b10> kwargs 2019-03-14 06:33:01,422-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe8944dd810> kwargs 2019-03-14 06:33:07,107-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894782490> kwargs 2019-03-14 06:33:12,794-0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fe894675490> kwargs 2019-03-14 06:33:18,470-0400 DEBUG ansible on_any args 
<ansible.executor.task_result.TaskResult object at 0x7fe8950ff750> kwargs
Engine VM status: EngineUnexpectedlyDown-EngineDown.
* Manually start the Engine VM via virsh: error: Failed to start domain HostedEngine error: the CPU is incompatible with host CPU: Host CPU does not provide required features: pcid _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/2O45TRVNJAEHFV...

My reply is not showing up here... Anyway, the hosted engine VM is not starting because the "PCID" CPU flag is required and the processor I'm using doesn't have that flag:

processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 95
model name : Intel(R) Atom(TM) CPU C3955 @ 2.10GHz
stepping : 1
microcode : 0x24
cpu MHz : 2101.000
cache size : 2048 KB
physical id : 0
siblings : 16
core id : 15
cpu cores : 16
apicid : 30
initial apicid : 30
fpu : yes
fpu_exception : yes
cpuid level : 21
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch epb cat_l2 intel_pt ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust smep erms mpx rdt_a rdseed smap clflushopt sha_ni xsaveopt xsavec xgetbv1 dtherm ida arat pln pts spec_ctrl intel_stibp arch_capabilities
bogomips : 4199.97
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
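For completeness, the missing flag is easy to confirm across every core straight from /proc/cpuinfo (plain grep, nothing oVirt-specific; vmx is queried only as a control flag that this CPU does expose):

  # count occurrences of the pcid flag; no output means no core exposes it
  grep -wo pcid /proc/cpuinfo | sort | uniq -c
  # compare against a flag that is present on this CPU
  grep -wo vmx /proc/cpuinfo | sort | uniq -c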

On Thu, Mar 14, 2019 at 6:50 PM Jagi Sarcilla < jagi.sarcilla@cevalogistics.com> wrote:
reply but not showing here..
anyway the hosted engine VM is not starting because the "PCID" CPU flag is required and the processor I'm using doesn't have that flag
Yes, but in that case the engine should have detected that and refused to add the host, whereas the engine actually accepted it.

On Thu, Mar 14, 2019 at 9:19 PM Simone Tiraboschi <stirabos@redhat.com> wrote:
On Thu, Mar 14, 2019 at 6:50 PM Jagi Sarcilla < jagi.sarcilla@cevalogistics.com> wrote:
reply but not showing here..
anyway the hosted engine VM is not starting because the "PCID" CPU flag is required and the processor I'm using doesn't have that flag
Yes, but in that case the engine should have detected that and refused to add the host, whereas the engine actually accepted it.
Meanwhile, according to your logs, the engine detected it as Westmere with +pcid:

2019-03-14 06:05:29,929-0400 DEBUG var changed: host "localhost" var "cluster_cpu_model" type "<class 'ansible.utils.unsafe_proxy.AnsibleUnsafeText'>" value: ""Westmere,+pcid,+spec-ctrl,+ssbd""

so maybe we have an issue here. I filed https://bugzilla.redhat.com/show_bug.cgi?id=1688989
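If the HostedEngine domain is still defined on the host, the same requirement should be visible in the generated VM definition. A read-only virsh query (no vdsm SASL credentials needed) along these lines would show it; the CPU block is expected to name Westmere and require the pcid feature, matching the cluster_cpu_model value above:

  # print everything from the first <cpu ...> element through </cpu>
  virsh -r dumpxml HostedEngine | sed -n '/<cpu/,/<\/cpu>/p'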

Correct, but the processor doesn't support the +pcid flag. How could I turn off the pcid flag before running hosted-engine --deploy, so that it will not check for the pcid flag during installation?

Is there a way to disable the PCID flag during Hosted Engine setup, either on the command line or via Cockpit? The processor doesn't support the PCID flag, so when the setup tries to start up the oVirt appliance, it won't start because of the missing processor flag. Any help is very much appreciated.

On Fri, Mar 15, 2019 at 2:45 PM Jagi Sarcilla < jagi.sarcilla@cevalogistics.com> wrote:
Is there a way to disable the PCID flag during Hosted Engine setup, either on the command line or via Cockpit? The processor doesn't support the PCID flag.
When the setup tries to start up the oVirt appliance, it won't start because of the missing processor flag.
Any help is very much appreciated.
The Atom C3000 family is currently not supported: it is wrongly detected as Westmere, and so the engine assumes that PCID is there, since it is present on all real Westmeres. Please follow the discussion on https://bugzilla.redhat.com/show_bug.cgi?id=1688989 - I don't see any viable workaround on that hardware right now.

Thank you for the information, I will monitor the BZ for updates.

Attached the logs.

Regards,
Jagi
participants (3)
- Jagi Sarcilla
- Sarcilla, Jagi
- Simone Tiraboschi