On Mon, Mar 12, 2018 at 6:27 PM, Kristian Petersen <nesretep(a)chem.byu.edu> wrote:
I tried using my customized vm.conf with the fix to the CPU name as you
suggested. I ran hosted-engine --vm-start --vm-conf=/root/myvm.conf, but that
failed: it said the VM didn't exist. It sounds like I might need to get the
updated package from the ovirt-4.2-pre repo and try deploying again.
On Mon, Mar 12, 2018 at 10:31 AM, Simone Tiraboschi <stirabos(a)redhat.com>
wrote:
>
>
> On Mon, Mar 12, 2018 at 5:25 PM, Kristian Petersen <nesretep(a)chem.byu.edu> wrote:
>
>> I'm guessing that v2.2.10 is not in the oVirt repo yet. When I looked
>> at vm.conf, the CPU name has a space in it, like the one mentioned in the
>> link you included. So perhaps replacing that space with an underscore
>> would do the trick?
>>
>
> v2.2.12 is in the -pre repo.
>
> You should replace the space with a dash: Broadwell-IBRS
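>
> If you want to double-check the exact model string libvirt expects on that
> host, something like the following should work (just a sketch; it assumes
> read-only virsh access works on the host):
>
> # list the CPU models libvirt knows for x86_64 and look for the IBRS variants
> virsh -r cpu-models x86_64 | grep -i ibrs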
>
>
>>
>> On Mon, Mar 12, 2018 at 10:00 AM, Kristian Petersen <
>> nesretep(a)chem.byu.edu> wrote:
>>
>>> I have v2.2.9 of ovirt-hosted-engine-setup currently installed. I'll
>>> try out the other suggestion you made also. Thanks for the help.
>>>
>>> On Fri, Mar 9, 2018 at 4:26 PM, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen <
>>>> nesretep(a)chem.byu.edu> wrote:
>>>>
>>>>> I have attached the relevant log files as requested.
>>>>> vdsm.log.1
>>>>> <https://drive.google.com/a/chem.byu.edu/file/d/1ibJG_SEjK9NSEPft_HCkZzQO2...>
>>>>>
>>>>>
>>>>
>>>>
>>>> The real issue is here:
>>>>
>>>> <cpu match="exact">
>>>> <model>BroadwellIBRS</model>
>>>> </cpu>
>>>> <on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash></domain> (vm:2751)
>>>> 2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm] (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process failed (vm:927)
>>>> Traceback (most recent call last):
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm
>>>>     self._run()
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in _run
>>>>     dom.createWithFlags(flags)
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
>>>>     ret = f(*args, **kwargs)
>>>>   File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
>>>>     return func(inst, *args, **kwargs)
>>>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
>>>>     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
>>>> libvirtError: internal error: Unknown CPU model BroadwellIBRS
>>>>
>>>> Indeed, it should be Broadwell-IBRS.
>>>>
>>>> Can you please report which rpm version of ovirt-hosted-engine-setup
>>>> you used?
>>>>
>>>> You can fix it this way: copy /var/run/ovirt-hosted-engine-ha/vm.conf
>>>> somewhere, edit the copy, and update the cpuType field.
>>>>
>>>> Then start the engine VM with your custom vm.conf, with something like:
>>>> hosted-engine --vm-start --vm-conf=/root/my_vm.conf
>>>> Keep the engine up for at least one hour and it will generate the
>>>> OVF_STORE disks with the right configuration for the hosted-engine VM.
>>>>
>>>> It failed right at the end of the setup, so everything else should be
>>>> fine.
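>>>>
>>>> To make that concrete, the whole workaround amounts to something like
>>>> the following (a minimal sketch; it assumes vm.conf uses key=value lines
>>>> such as cpuType=..., and /root/my_vm.conf is just an example path):
>>>>
>>>> # work on a copy, never on the file under /var/run
>>>> cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/my_vm.conf
>>>>
>>>> # set the CPU model to the dashed form libvirt expects
>>>> sed -i 's/^cpuType=.*/cpuType=Broadwell-IBRS/' /root/my_vm.conf
>>>>
>>>> # start the engine VM from the edited copy
>>>> hosted-engine --vm-start --vm-conf=/root/my_vm.conf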
>>>>
>>>>
>>>>
>>>>>
>>>>> On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi <
>>>>> stirabos(a)redhat.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen <
>>>>>> nesretep(a)chem.byu.edu> wrote:
>>>>>>
>>>>>>> I am trying to deploy oVirt with a self-hosted engine and the setup
>>>>>>> seems to go well until near the very end, when the status message says:
>>>>>>> [ INFO ] TASK [Wait for the engine to come up on the target VM]
>>>>>>>
>>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true,
>>>>>>> "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.216412",
>>>>>>> "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": "2018-03-07 16:02:02.461066",
>>>>>>> "stderr": "", "stderr_lines": [],
>>>>>>> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
>>>>>>> \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
>>>>>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1,
>>>>>>> \"engine-status\": {\"reason\": \"vm not running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"},
>>>>>>> \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\",
>>>>>>> \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_maintenance\": false}",
>>>>>>> "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
>>>>>>> \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
>>>>>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1,
>>>>>>> \"engine-status\": {\"reason\": \"vm not running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"},
>>>>>>> \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\",
>>>>>>> \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_maintenance\": false}"]}
>>>>>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
>>>>>>>
>>>>>>> Any ideas that might help?
>>>>>>>
>>>>>>
>>>>>>
>>>>>> Hi Kristian,
>>>>>> {\"reason\": \"vm not running on this host\" sounds really bad.
>>>>>> It means that ovirt-ha-agent (in charge of restarting the engine VM)
>>>>>> thinks that another host took over, but at that stage you should have
>>>>>> just one host.
>>>>>>
>>>>>> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log
>>>>>> and /var/log/vdsm/vdsm.log for the relevant time frame?
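>>>>>>
>>>>>> Something like this would collect both files in one archive (just a
>>>>>> sketch; the archive name is only an example):
>>>>>>
>>>>>> # bundle the two requested logs for attaching to the thread
>>>>>> tar czf hosted-engine-logs-$(date +%Y%m%d).tar.gz \
>>>>>>     /var/log/ovirt-hosted-engine-ha/agent.log \
>>>>>>     /var/log/vdsm/vdsm.log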
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Kristian Petersen
>>>>>>> System Administrator
>>>>>>> Dept. of Chemistry and Biochemistry
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list
>>>>>>> Users(a)ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Kristian Petersen
>>>>> System Administrator
>>>>> Dept. of Chemistry and Biochemistry
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Kristian Petersen
>>> System Administrator
>>> BYU Dept. of Chemistry and Biochemistry
>>>
>>
>>
>>
>> --
>> Kristian Petersen
>> System Administrator
>> BYU Dept. of Chemistry and Biochemistry
>>
>
>
--
Kristian Petersen
System Administrator
BYU Dept. of Chemistry and Biochemistry