On Thu, Jun 22, 2017 at 12:38 PM, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
>> On 22 Jun 2017, at 12:31, Martin Sivak <msivak@redhat.com> wrote:
>>
>> Tomas, what fields are needed in a VM to pass the check that causes
>> the following error?
>>
>>>>>> WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm' failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
>
> To match the OS and VM display type ;-)
> The configuration is in osinfo… e.g. if that is an import from an older
> release on Linux, this is typically caused by the change from cirrus to
> vga for non-SPICE VMs.

yep, the default supported combinations for 4.0+ are these:

  os.other.devices.display.protocols.value = spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus
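If you ever need to allow a different combination (e.g. to let an imported
VM keep vnc/cirrus on another OS type), my understanding is that you add an
osinfo override file on the engine machine rather than edit the defaults.
A minimal sketch, assuming the stock drop-in directory; the file name is
just an example:

  # /etc/ovirt-engine/osinfo.conf.d/90-display-override.properties
  # value format is <graphics>/<video>,... per OS entry; this re-states
  # the 4.0+ default shown above
  os.other.devices.display.protocols.value = spice/qxl,vnc/vga,vnc/qxl,vnc/cirrus

  # restart the engine so the override is picked up
  systemctl restart ovirt-engine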
>>
>> Thanks.
>>
>> On Thu, Jun 22, 2017 at 12:19 PM, cmc <iucounu@gmail.com> wrote:
>>> Hi Martin,
>>>
>>>>
>>>> just as a random comment, do you still have the database backup from
>>>> the bare metal -> VM attempt? It might be possible to just try again
>>>> using it. Or, in the worst case, update the offending value there
>>>> before restoring it to the new engine instance.
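>>>>
>>>> A rough sketch of what I mean (the table and column names are from my
>>>> memory of the 4.1 schema, so double-check them against your DB before
>>>> changing anything):
>>>>
>>>>   # see which graphics/video devices each VM has
>>>>   sudo -u postgres psql engine -c "SELECT s.vm_name, d.type, d.device
>>>>       FROM vm_device d JOIN vm_static s ON s.vm_guid = d.vm_id
>>>>       WHERE d.type IN ('graphics', 'video');"
>>>>
>>>>   # e.g. switch an offending cirrus video device to vga for one VM
>>>>   sudo -u postgres psql engine -c "UPDATE vm_device SET device = 'vga'
>>>>       WHERE vm_id = '<vm_guid>' AND type = 'video' AND device = 'cirrus';"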
>>>
>>> I still have the backup. I'd rather do the latter, as re-running the
>>> HE deployment is quite lengthy and involved (I have to re-initialise
>>> the FC storage each time). Do you know what the offending value(s)
>>> would be? Would it be in the Postgres DB or in a config file
>>> somewhere?
>>>
>>> Cheers,
>>>
>>> Cam
>>>
>>>> Regards
>>>>
>>>> Martin Sivak
>>>>
>>>> On Thu, Jun 22, 2017 at 11:39 AM, cmc <iucounu@gmail.com> wrote:
>>>>> Hi Yanir,
>>>>>
>>>>> Thanks for the reply.
>>>>>
>>>>>> First of all, maybe a chain reaction of:
>>>>>> WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm' failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
>>>>>> is causing the hosted engine VM not to be set up correctly, and further
>>>>>> actions were taken while the hosted engine VM wasn't in a stable state.
>>>>>>
>>>>>> As for now, are you trying to revert to a previous/initial state?
>>>>>
>>>>> I'm not trying to revert it to a previous state for now. This was a
>>>>> migration from a bare metal engine, and it didn't report any error
>>>>> during the migration. I'd had some problems on my first attempts at
>>>>> this migration, whereby it never completed (due to a proxy issue) but
>>>>> I managed to resolve this. Do you know of a way to get the Hosted
>>>>> Engine VM into a stable state, without rebuilding the entire cluster
>>>>> from scratch (since I have a lot of VMs on it)?
>>>>>
>>>>> Thanks for any help.
>>>>>
>>>>> Regards,
>>>>>
>>>>> Cam
>>>>>
>>>>>> Regards,
>>>>>> Yanir
>>>>>>
>>>>>> On Wed, Jun 21, 2017 at 4:32 PM, cmc <iucounu@gmail.com> wrote:
>>>>>>>
>>>>>>> Hi Jenny/Martin,
>>>>>>>
>>>>>>> Any idea what I can do here? The hosted engine VM has no log on any
>>>>>>> host in /var/log/libvirt/qemu, and I fear that if I need to put the
>>>>>>> host I created it on (which I think is hosting it) into maintenance,
>>>>>>> e.g. to upgrade it, or if that host fails for any reason, the VM won't
>>>>>>> get migrated to another host and I will not be able to manage the
>>>>>>> cluster. It seems to be a very dangerous position to be in.
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>> Cam
>>>>>>>
>>>>>>> On Wed, Jun 21, 2017 at 11:48 AM, cmc <iucounu@gmail.com> wrote:
>>>>>>>> Thanks Martin. The hosts are all part of the same cluster.
>>>>>>>>
>>>>>>>> I get these errors in the engine.log on the engine:
>>>>>>>>
>>>>>>>> 2017-06-19 03:28:05,030Z WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm' failed for user SYSTEM. Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
>>>>>>>> 2017-06-19 03:28:05,030Z INFO [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (org.ovirt.thread.pool-6-thread-23) [] Lock freed to object 'EngineLock:{exclusiveLocks='[a79e6b0e-fff4-4cba-a02c-4c00be151300=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>, HostedEngine=<VM_NAME, ACTION_TYPE_FAILED_NAME_ALREADY_USED>]', sharedLocks='[a79e6b0e-fff4-4cba-a02c-4c00be151300=<REMOTE_VM, ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]'}'
>>>>>>>> 2017-06-19 03:28:05,030Z ERROR [org.ovirt.engine.core.bll.HostedEngineImporter] (org.ovirt.thread.pool-6-thread-23) [] Failed importing the Hosted Engine VM
>>>>>>>>
>>>>>>>> The sanlock.log reports conflicts on that same host, and a different
>>>>>>>> error on the other hosts, not sure if they are related.
>>>>>>>>
>>>>>>>> And this in the /var/log/ovirt-hosted-engine-ha/agent log on the host
>>>>>>>> which I deployed the hosted engine VM on:
>>>>>>>>
>>>>>>>> MainThread::ERROR::2017-06-19 13:09:49,743::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Unable to extract HEVM OVF
>>>>>>>> MainThread::ERROR::2017-06-19 13:09:49,743::config::445::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store) Failed extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf
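>>>>>>>>
>>>>>>>> (In case it helps: as far as I understand it, the OVF_STORE volume
>>>>>>>> holds a plain tar archive, so on FC storage something along these
>>>>>>>> lines should show whether the HostedEngine OVF is in there at all;
>>>>>>>> the UUIDs below are placeholders for the hosted-engine storage
>>>>>>>> domain and its OVF_STORE volume:
>>>>>>>>
>>>>>>>>   lvchange -ay <sd_uuid>/<ovf_store_vol_uuid>
>>>>>>>>   dd if=/dev/<sd_uuid>/<ovf_store_vol_uuid> | tar -tvf -   # should list <vm_guid>.ovf
>>>>>>>> )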
>>>>>>>>
>>>>>>>> I've seen some of these issues reported in Bugzilla, but they were for
>>>>>>>> older versions of oVirt (and appear to be resolved).
>>>>>>>>
>>>>>>>> I will install that package on the other two hosts, putting them in
>>>>>>>> maintenance first, since vdsm is installed as an upgrade. I guess
>>>>>>>> restarting vdsm is a good idea after that?
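>>>>>>>>
>>>>>>>> I.e., per host, something along these lines (a sketch; these are the
>>>>>>>> package and service names as I understand them on 4.1):
>>>>>>>>
>>>>>>>>   yum install ovirt-hosted-engine-ha
>>>>>>>>   systemctl restart vdsmd
>>>>>>>>   systemctl enable ovirt-ha-broker ovirt-ha-agent
>>>>>>>>   systemctl start ovirt-ha-broker ovirt-ha-agent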
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Campbell
>>>>>>>>
>>>>>>>> On Wed, Jun 21, 2017 at 10:51 AM, Martin Sivak <msivak@redhat.com> wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> you do not have to install it on all hosts. But you should have more
>>>>>>>>> than one, and ideally all hosted-engine-enabled nodes should belong to
>>>>>>>>> the same engine cluster.
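>>>>>>>>>
>>>>>>>>> E.g. on each extra host you want to be hosted-engine capable (a
>>>>>>>>> sketch from memory for 4.1; the deploy script should detect the
>>>>>>>>> existing setup and ask for a unique host id, and adding the host
>>>>>>>>> from the webadmin UI with the hosted engine action selected should
>>>>>>>>> work too):
>>>>>>>>>
>>>>>>>>>   yum install ovirt-hosted-engine-setup
>>>>>>>>>   hosted-engine --deploy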
>>>>>>>>>
>>>>>>>>> Best regards
>>>>>>>>>
>>>>>>>>> Martin Sivak
>>>>>>>>>
>>>>>>>>>> On Wed, Jun 21, 2017 at 11:29 AM, cmc <iucounu@gmail.com> wrote:
>>>>>>>>>> Hi Jenny,
>>>>>>>>>>
>>>>>>>>>> Does ovirt-hosted-engine-ha need to be installed across all hosts?
>>>>>>>>>> Could that be the reason it is failing to see it properly?
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> Cam
>>>>>>>>>>
>>>>>>>>>>> On Mon, Jun 19, 2017 at 1:27 PM, cmc <iucounu@gmail.com> wrote:
>>>>>>>>>>> Hi Jenny,
>>>>>>>>>>>
>>>>>>>>>>> Logs are attached. I can see errors in there, but am unsure how they
>>>>>>>>>>> arose.
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>>
>>>>>>>>>>> Campbell
>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Jun 19, 2017 at 12:29 PM, Evgenia Tokar <etokar@redhat.com> wrote:
>>>>>>>>>>>> From the output it looks like the agent is down; try starting it by
>>>>>>>>>>>> running:
>>>>>>>>>>>>
>>>>>>>>>>>>   systemctl start ovirt-ha-agent
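>>>>>>>>>>>>
>>>>>>>>>>>> If it does not stay up, the service status and recent journal output
>>>>>>>>>>>> are the first things to look at, e.g.:
>>>>>>>>>>>>
>>>>>>>>>>>>   systemctl status ovirt-ha-agent ovirt-ha-broker
>>>>>>>>>>>>   journalctl -u ovirt-ha-agent -n 100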
>>>>>>>>>>>>
>>>>>>>>>>>> The engine is supposed to see the hosted engine storage domain and
>>>>>>>>>>>> import it into the system; then it should import the hosted engine VM.
>>>>>>>>>>>>
>>>>>>>>>>>> Can you attach the agent log from the host
>>>>>>>>>>>> (/var/log/ovirt-hosted-engine-ha/agent.log)
>>>>>>>>>>>> and the engine log from the engine VM
>>>>>>>>>>>> (/var/log/ovirt-engine/engine.log)?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Jenny
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Jun 19, 2017 at 12:41 PM, cmc <iucounu@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Jenny,
>>>>>>>>>>>>>
>>>>>>>>>>>>>> What version are you running?
>>>>>>>>>>>>>
>>>>>>>>>>>>> 4.1.2.2-1.el7.centos
>>>>>>>>>>>>>
>>>>>>>>>>>>>> For the hosted engine VM to be imported and displayed in the engine,
>>>>>>>>>>>>>> you must first create a master storage domain.
>>>>>>>>>>>>>
>>>>>>>>>>>>> To provide a bit more detail: this was a migration of a bare-metal
>>>>>>>>>>>>> engine in an existing cluster to a hosted engine VM for that cluster.
>>>>>>>>>>>>> As part of this migration, I built an entirely new host and ran
>>>>>>>>>>>>> 'hosted-engine --deploy' (following these instructions:
>>>>>>>>>>>>> http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/).
>>>>>>>>>>>>> I restored the backup from the engine and it completed without any
>>>>>>>>>>>>> errors. I didn't see any instructions regarding a master storage
>>>>>>>>>>>>> domain in the page above. The cluster has two existing master storage
>>>>>>>>>>>>> domains: one is fibre channel, which is up, and one ISO domain, which
>>>>>>>>>>>>> is currently offline.
>>>>>>>>>>>>>
>>>>>>>>>>>>>> What do you mean the hosted engine commands are failing? What happens
>>>>>>>>>>>>>> when you run hosted-engine --vm-status now?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Interestingly, whereas when I ran it before it exited with no output
>>>>>>>>>>>>> and a return code of '1', it now reports:
>>>>>>>>>>>>>
>>>>>>>>>>>>> --== Host 1 status ==--
>>>>>>>>>>>>>
>>>>>>>>>>>>> conf_on_shared_storage             : True
>>>>>>>>>>>>> Status up-to-date                  : False
>>>>>>>>>>>>> Hostname                           : kvm-ldn-03.ldn.fscfc.co.uk
>>>>>>>>>>>>> Host ID                            : 1
>>>>>>>>>>>>> Engine status                      : unknown stale-data
>>>>>>>>>>>>> Score                              : 0
>>>>>>>>>>>>> stopped                            : True
>>>>>>>>>>>>> Local maintenance                  : False
>>>>>>>>>>>>> crc32                              : 0217f07b
>>>>>>>>>>>>> local_conf_timestamp               : 2911
>>>>>>>>>>>>> Host timestamp                     : 2897
>>>>>>>>>>>>> Extra metadata (valid at timestamp):
>>>>>>>>>>>>>   metadata_parse_version=1
>>>>>>>>>>>>>   metadata_feature_version=1
>>>>>>>>>>>>>   timestamp=2897 (Thu Jun 15 16:22:54 2017)
>>>>>>>>>>>>>   host-id=1
>>>>>>>>>>>>>   score=0
>>>>>>>>>>>>>   vm_conf_refresh_time=2911 (Thu Jun 15 16:23:08 2017)
>>>>>>>>>>>>>   conf_on_shared_storage=True
>>>>>>>>>>>>>   maintenance=False
>>>>>>>>>>>>>   state=AgentStopped
>>>>>>>>>>>>>   stopped=True
>>>>>>>>>>>>>
>>>>>>>>>>>>> Yet I can log in to the web GUI fine. I guess it is not HA due to
>>>>>>>>>>>>> being in an unknown state currently? Does the hosted-engine-ha rpm
>>>>>>>>>>>>> need to be installed across all nodes in the cluster, btw?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks for the help,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Cam
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Jenny Tokar
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Jun 15, 2017 at 6:32 PM, cmc <iucounu@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I've migrated from a bare-metal engine to a hosted engine. There were
>>>>>>>>>>>>>>> no errors during the install; however, the hosted engine did not get
>>>>>>>>>>>>>>> started. I tried running:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>   hosted-engine --status
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> on the host I deployed it on, and it returns nothing (the exit code
>>>>>>>>>>>>>>> is 1, however). I could not ping it either. So I tried starting it
>>>>>>>>>>>>>>> via 'hosted-engine --vm-start' and it returned:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>   Virtual machine does not exist
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But it then became available, and I logged into it successfully. It
>>>>>>>>>>>>>>> is not in the list of VMs, however.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any ideas why the hosted-engine commands fail, and why it is not in
>>>>>>>>>>>>>>> the list of virtual machines?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks for any help,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Cam