On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen <nesretep(a)chem.byu.edu>
wrote:
The real issue is here:
<cpu match="exact">
<model>BroadwellIBRS</model>
</cpu>
<on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash></domain>
(vm:2751)
2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm] (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process failed (vm:927)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: internal error: Unknown CPU model BroadwellIBRS
Indeed, it should be Broadwell-IBRS.
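If you want to double check which CPU model names libvirt on that host actually
knows (the exact list depends on your libvirt version), something like

  virsh cpu-models x86_64 | grep -i ibrs

should list Broadwell-IBRS but not BroadwellIBRS.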
Can you please report which rpm version of ovirt-hosted-engine-setup you used?
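(You can get it with rpm -q ovirt-hosted-engine-setup on the host.)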
You can fix it this way:
copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere, edit it, and update
the cpuType field.
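For example, assuming vm.conf uses the usual key=value format (the
/root/my_vm.conf path here is just an example):

  cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/my_vm.conf
  # then edit /root/my_vm.conf and change the cpuType line, e.g.
  #   cpuType=BroadwellIBRS  ->  cpuType=Broadwell-IBRS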
Then start the engine VM with your custom vm.conf with something like:
hosted-engine --vm-start --vm-conf=/root/my_vm.conf
Keep the engine up for at least one hour and it will generate the OVF_STORE
disks with the right configuration for the hosted-engine VM.
It failed right at the end of the setup, so everything else should be fine.
On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi <stirabos(a)redhat.com>
wrote:
>
>
> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen <nesretep(a)chem.byu.edu>
> wrote:
>
>> I am trying to deploy oVirt with a self-hosted engine and the setup
>> seems to go well until near the very end when the status message says:
>> [ INFO ] TASK [Wait for the engine to come up on the target VM]
>>
>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true,
>> "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.216412",
>> "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": "2018-03-07 16:02:02.461066",
>> "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true,
>> \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=4679955
>> (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956
>> (Wed Mar 7 16:01:51 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1,
>> \"engine-status\": {\"reason\": \"vm not running on this host\", \"health\": \"bad\",
>> \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, \"stopped\": false,
>> \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956,
>> \"host-ts\": 4679955}, \"global_maintenance\": false}",
>> "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
>> \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=4679955
>> (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956
>> (Wed Mar 7 16:01:51 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1,
>> \"engine-status\": {\"reason\": \"vm not running on this host\", \"health\": \"bad\",
>> \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, \"stopped\": false,
>> \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956,
>> \"host-ts\": 4679955}, \"global_maintenance\": false}"]}
>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
>> ansible-playbook
>>
>> Any ideas that might help?
>>
>
>
> Hi Kristian,
> {\"reason\": \"vm not running on this host\" sonds really bad.
> I means that ovirt-ha-agent (in charge of restarting the engine VM) think
> that another host took over but at that stage you should have just one host.
>
> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log and
> /var/log/vdsm/vdsm.log for the relevant time frame?
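> For example, something like
>
>   tar czf hosted-engine-logs.tar.gz /var/log/ovirt-hosted-engine-ha/agent.log /var/log/vdsm/vdsm.log
>
> (the archive name is just an example) would collect both in one go.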
>
>
>>
>>
>> --
>> Kristian Petersen
>> System Administrator
>> Dept. of Chemistry and Biochemistry
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
--
Kristian Petersen
System Administrator
Dept. of Chemistry and Biochemistry