<div dir="ltr"><div><div>I've checked id's in /rhev/data-center/mnt/glusterSD/*...../dom_md/ <br><br># -rw-rw----. 1 vdsm kvm 1048576 Mar 12 05:14 ids<br><br></div><div>seems ok<br></div><div><br>sanlock.log showing;<br>---------------------------<br>r14 acquire_token open error -13<br>r14 cmd_acquire 2,11,89283 acquire_token -13 <br><br></div>Now I'm not quiet sure on which direction to take. <br><br>Lockspace<br>---------------<br>"hosted-engine --reinitialize-lockspace" is throwing an exception;<br><br>Exception("Lockfile reset cannot be performed with"<br>Exception: Lockfile reset cannot be performed with an active agent.<br><br><br></div>@didi - I am in "Global Maintenance". <br>I just noticed that host 1 now shows.<br>Engine status: unknown stale-data<br>state= AgentStopped<br><br>I'm pretty sure Ive been able to start the Engine VM while in Global Maintenance. But you raise a good question. I don't see why you would be restricted in running the engine while in Global or even starting the VM. If so this is a little bakwards.<br><br><br><br><div><br><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 12 March 2017 at 16:28, Yedidyah Bar David <span dir="ltr"><<a href="mailto:didi@redhat.com" target="_blank">didi@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Fri, Mar 10, 2017 at 12:39 PM, Martin Sivak <<a href="mailto:msivak@redhat.com">msivak@redhat.com</a>> wrote:<br>
> Hi Ian,<br>
><br>
> it is normal that VDSMs are competing for the lock, one should win<br>
> though. If that is not the case then the lockspace might be corrupted<br>
> or the sanlock daemons can't reach it.<br>
><br>
> I would recommend putting the cluster to global maintenance and<br>
> attempting a manual start using:<br>
><br>
> # hosted-engine --set-maintenance --mode=global<br>
> # hosted-engine --vm-start<br>
<br>
</span>Is that possible? See also:<br>
<br>
<a href="http://lists.ovirt.org/pipermail/users/2016-January/036993.html" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>pipermail/users/2016-January/<wbr>036993.html</a><br>
<div class="HOEnZb"><div class="h5"><br>
><br>
> You will need to check your storage connectivity and sanlock status on<br>
> all hosts if that does not work.<br>
><br>
> # sanlock client status<br>
><br>
> There are a couple of locks I would expect to be there (ha_agent, spm),<br>
> but no lock for hosted engine disk should be visible.<br>
><br>
> Next steps depend on whether you have important VMs running on the<br>
> cluster and on the Gluster status (I can't help you there<br>
> unfortunately).<br>
><br>
> Best regards<br>
><br>
> --<br>
> Martin Sivak<br>
> SLA / oVirt<br>
><br>
><br>
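(As a sketch of the storage-side checks suggested above — the volume name "data" is an assumption taken from the 192.168.3.10:_data mount path, so adjust as needed:<br><br># sanlock client status                          (on each host)<br># gluster volume status data<br># gluster volume heal data info<br># ls -l /rhev/data-center/mnt/glusterSD/*/dom_md/<br><br>The files under dom_md should be owned by vdsm:kvm on every host, matching the ids listing at the top of this mail.)<br>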
> On Fri, Mar 10, 2017 at 7:37 AM, Ian Neilsen <<a href="mailto:ian.neilsen@gmail.com">ian.neilsen@gmail.com</a>> wrote:<br>
>> I just noticed this in the vdsm.logs. The agent looks like it is trying to<br>
>> start hosted engine on both machines??<br>
>><br>
>> <on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash></domain><br>
>> Thread-7517::ERROR::2017-03-10<br>
>> 01:26:13,053::vm::773::virt.vm::(_startUnderlyingVm)<br>
>> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::The vm start process failed<br>
>> Traceback (most recent call last):<br>
>> File "/usr/share/vdsm/virt/vm.py", line 714, in _startUnderlyingVm<br>
>> self._run()<br>
>> File "/usr/share/vdsm/virt/vm.py", line 2026, in _run<br>
>> self._connection.createXML(domxml, flags),<br>
>> File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line<br>
>> 123, in wrapper ret = f(*args, **kwargs)<br>
>> File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in<br>
>> wrapper return func(inst, *args, **kwargs)<br>
>> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in<br>
>> createXML if ret is None:raise libvirtError('virDomainCreateXML() failed',<br>
>> conn=self)<br>
>><br>
>> libvirtError: Failed to acquire lock: Permission denied<br>
>><br>
>> INFO::2017-03-10 01:26:13,054::vm::1330::virt.vm::(setDownStatus)<br>
>> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::Changed state to Down: Failed<br>
>> to acquire lock: Permission denied (code=1)<br>
>> INFO::2017-03-10 01:26:13,054::guestagent::430::virt.vm::(stop)<br>
>> vmId=`2419f9fe-4998-4b7a-9fe9-151571d20379`::Stopping connection<br>
>><br>
>> DEBUG::2017-03-10 01:26:13,054::vmchannels::238::vds::(unregister) Delete<br>
>> fileno 56 from listener.<br>
>> DEBUG::2017-03-10 01:26:13,055::vmchannels::66::vds::(_unregister_fd) Failed<br>
>> to unregister FD from epoll (ENOENT): 56<br>
>> DEBUG::2017-03-10 01:26:13,055::__init__::209::jsonrpc.Notification::(emit)<br>
>> Sending event {"params": {"2419f9fe-4998-4b7a-9fe9-151571d20379": {"status":<br>
>> "Down", "exitReason": 1, "exitMessage": "Failed to acquire lock: Permission<br>
>> denied", "exitCode": 1}, "notify_time": 4308740560}, "jsonrpc": "2.0",<br>
>> "method": "|virt|VM_status|2419f9fe-4998-4b7a-9fe9-151571d20379"}<br>
>> VM Channels Listener::DEBUG::2017-03-10<br>
>> 01:26:13,475::vmchannels::142::vds::(_do_del_channels) fileno 56 was removed<br>
>> from listener.<br>
>> DEBUG::2017-03-10 01:26:14,430::check::296::storage.check::(_start_process)<br>
>> START check<br>
>> u'/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata'<br>
>> cmd=['/usr/bin/taskset', '--cpu-list', '0-39', '/usr/bin/dd',<br>
>> u'if=/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata',<br>
>> 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00<br>
>> DEBUG::2017-03-10 01:26:14,481::asyncevent::564::storage.asyncevent::(reap)<br>
>> Process <cpopen.CPopen object at 0x3ba6550> terminated (count=1)<br>
>> DEBUG::2017-03-10<br>
>> 01:26:14,481::check::327::storage.check::(_check_completed) FINISH check<br>
>> u'/rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/metadata'<br>
>> rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n300 bytes (300 B)<br>
>> copied, 8.7603e-05 s, 3.4 MB/s\n') elapsed=0.06<br>
>><br>
>><br>
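(For the "Failed to acquire lock: Permission denied" traceback above, one hedged thing to check is ownership and SELinux context on the storage domain, using the same path the dd check prints:<br><br># ls -lZ /rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md/<br># chown -R vdsm:kvm /rhev/data-center/mnt/glusterSD/192.168.3.10:_data/a08822ec-3f5b-4dba-ac2d-5510f0b4b6a2/dom_md<br><br>On the Gluster side, the brick-ownership options would be something like the following, assuming the volume is named "data" and that vdsm/kvm are uid/gid 36, as is usual on oVirt hosts:<br><br># gluster volume set data storage.owner-uid 36<br># gluster volume set data storage.owner-gid 36<br><br>followed by re-checking "sanlock client status" on each host.)<br>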
>> On 10 March 2017 at 10:40, Ian Neilsen <<a href="mailto:ian.neilsen@gmail.com">ian.neilsen@gmail.com</a>> wrote:<br>
>>><br>
>>> Hi All<br>
>>><br>
>>> I had a storage issue with my gluster volumes running under ovirt hosted.<br>
>>> I now cannot start the hosted engine manager vm from "hosted-engine<br>
>>> --vm-start".<br>
>>> I've scoured the net to find a way, but can't seem to find anything<br>
>>> concrete.<br>
>>><br>
>>> Running CentOS 7, oVirt 4.0 and Gluster 3.8.9<br>
>>><br>
>>> How do I recover the engine manager? I'm at a loss!<br>
>>><br>
>>> Engine Status = score between nodes was 0 for all, now node 1 is reading<br>
>>> 3400, but all others are 0<br>
>>><br>
>>> {"reason": "bad vm status", "health": "bad", "vm": "down", "detail":<br>
>>> "down"}<br>
>>><br>
>>><br>
>>> Logs from agent.log<br>
>>> ==================<br>
>>><br>
>>> INFO::2017-03-09<br>
>>> 19:32:52,600::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>
>>> Global maintenance detected<br>
>>> INFO::2017-03-09<br>
>>> 19:32:52,603::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)<br>
>>> Initializing VDSM<br>
>>> INFO::2017-03-09<br>
>>> 19:32:54,820::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)<br>
>>> Connecting the storage<br>
>>> INFO::2017-03-09<br>
>>> 19:32:54,821::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)<br>
>>> Connecting storage server<br>
>>> INFO::2017-03-09<br>
>>> 19:32:59,194::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)<br>
>>> Connecting storage server<br>
>>> INFO::2017-03-09<br>
>>> 19:32:59,211::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)<br>
>>> Refreshing the storage domain<br>
>>> INFO::2017-03-09<br>
>>> 19:32:59,328::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)<br>
>>> Preparing images<br>
>>> INFO::2017-03-09<br>
>>> 19:32:59,328::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)<br>
>>> Preparing images<br>
>>> INFO::2017-03-09<br>
>>> 19:33:01,748::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)<br>
>>> Reloading vm.conf from the shared storage domain<br>
>>> INFO::2017-03-09<br>
>>> 19:33:01,748::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)<br>
>>> Trying to get a fresher copy of vm configuration from the OVF_STORE<br>
>>> WARNING::2017-03-09<br>
>>> 19:33:04,056::ovf_store::107::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)<br>
>>> Unable to find OVF_STORE<br>
>>> ERROR::2017-03-09<br>
>>> 19:33:04,058::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)<br>
>>> Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf<br>
>>><br>
>>> ovirt-ha-agent logs<br>
>>> ================<br>
>>><br>
>>> ovirt-ha-agent<br>
>>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable<br>
>>> to get vm.conf from OVF_STORE, falling back to initial vm.conf<br>
>>><br>
>>> vdsm<br>
>>> ======<br>
>>><br>
>>> vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof<br>
>>><br>
>>> ovirt-ha-broker<br>
>>> ============<br>
>>><br>
>>> ovirt-ha-broker cpu_load_no_engine.<wbr>EngineHealth ERROR Failed to<br>
>>> getVmStats: 'pid'<br>
>>><br>
>>> --<br>
>>> Ian Neilsen<br>
>>><br>
>>> Mobile: 0424 379 762<br>
>>> Linkedin: <a href="http://au.linkedin.com/in/ianneilsen" rel="noreferrer" target="_blank">http://au.linkedin.com/in/ianneilsen</a><br>
>>> Twitter : ineilsen<br>
>><br>
>><br>
>><br>
>><br>
>> --<br>
>> Ian Neilsen<br>
>><br>
>> Mobile: <a href="tel:0424%20379%20762" value="+61424379762">0424 379 762</a><br>
>> Linkedin: <a href="http://au.linkedin.com/in/ianneilsen" rel="noreferrer" target="_blank">http://au.linkedin.com/in/ianneilsen</a><br>
>> Twitter : ineilsen<br>
>><br>
>> _______________________________________________<br>
>> Users mailing list<br>
>> <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
>> <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
>><br>
> _______________________________________________<br>
> Users mailing list<br>
> <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
> <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
<br>
<br>
<br>
</div></div><span class="HOEnZb"><font color="#888888">--<br>
Didi<br>
</font></span></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>Ian Neilsen<br><br>Mobile: 0424 379 762<br>Linkedin: <a href="http://au.linkedin.com/in/ianneilsen" target="_blank">http://au.linkedin.com/in/ianneilsen</a><div>Twitter : ineilsen</div></div></div></div>
</div>