<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Oct 22, 2015 at 5:38 PM, Simone Tiraboschi <span dir="ltr"><<a href="mailto:stirabos@redhat.com" target="_blank">stirabos@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><br><div>In case I want to setup a single host with self hosted engine, could I configure on hypervisor</div><div>a) one NFS share for sh engine</div><div>b) one NFS share for ISO DOMAIN</div><div>c) a local filesystem to be used to create then a local POSIX complant FS storage domain </div><div>and work this way as a replacement of all-in-one?<br></div></div></div></div></blockquote><div><br></div></span><div>Yes but c is just a workaround, using another external NFS share would help a lot if in the future you plan to add o to migrate to a new server.</div></div></div></div></blockquote><div><br></div><div>Why do you see this as a workaround, if I plan to have this for example as a devel personal infra without no other hypervisors?</div><div>I think about better performance directly going local instead of adding overhead of NFS with itself....</div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div 
dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><font color="#888888"><div><br></div></font></span></div></div></div></blockquote><div><br></div></span><div>Put the host in global maintenance (otherwise the engine VM will be restarted)</div><div>Shutdown the engine VM</div><div>Shutdown the host</div><div> </div></div></div></div></blockquote></span></div></div></div></blockquote></span></div></div></div></blockquote><div><br></div><div>Please note that at some point I had to power off the hypervisor in the previous step, because it was stalled trying to stop two processes:</div><div>"Watchdog Multiplexing Daemon"<br></div><div>and</div><div>"Shared Storage Lease Manager" </div><div><a href="https://drive.google.com/file/d/0BwoPbcrMv8mvTVoyNzhRNGpqN1U/view?usp=sharing">https://drive.google.com/file/d/0BwoPbcrMv8mvTVoyNzhRNGpqN1U/view?usp=sharing</a><br></div><div><br></div><div>It was apparently able to stop the "Watchdog Multiplexing Daemon" after some minutes</div><div><a href="https://drive.google.com/file/d/0BwoPbcrMv8mvZExNNkw5LVBiXzA/view?usp=sharing">https://drive.google.com/file/d/0BwoPbcrMv8mvZExNNkw5LVBiXzA/view?usp=sharing</a><br></div><div><br></div><div>But no way for the Shared Storage Lease Manager and the screen above is when I forced a power off yesterday, after global maintenance and correct shutdown of sh engine and shutdown of hypervisor stalled.</div><div><br></div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" 
style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><font color="#888888"><div></div><div><br></div></font></span></div></div></div></blockquote></div></div></div></blockquote><div><br></div></span><div>Ok. 
And to start everything again, is this correct:</div><div><br></div><div>a) power on the hypervisor</div><div>b) hosted-engine --set-maintenance --mode=none</div><div><br></div><div>Are any other steps required?</div></div><br></div></div></blockquote><div> </div></span></div>No, that's correct</div></div></blockquote><div><br></div><div><br></div><div>Today, after powering on the hypervisor and waiting about 6 minutes, I ran:</div><div><br></div><div> [root@ovc71 ~]# ps -ef|grep qemu</div><div>root 2104 1985 0 15:41 pts/0 00:00:00 grep --color=auto qemu</div><div><br></div><div>--> as expected, no VM is running</div><div><br></div><div><div>[root@ovc71 ~]# systemctl status vdsmd</div><div>vdsmd.service - Virtual Desktop Server Manager</div><div> Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)</div><div> Active: active (running) since Fri 2015-10-23 15:34:46 CEST; 3min 25s ago</div><div> Process: 1666 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)</div><div> Main PID: 1745 (vdsm)</div><div> CGroup: /system.slice/vdsmd.service</div><div> ├─1745 /usr/bin/python /usr/share/vdsm/vdsm</div><div> └─1900 /usr/libexec/ioprocess --read-pipe-fd 56 --write-pipe-fd 55 --max-threads 10 --...</div><div><br></div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 client step 1</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 ask_user_info()</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 client step 1</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 ask_user_info()</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 make_client_response()</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 client step 2</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 parse_server_challenge()</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 
ask_user_info()</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 make_client_response()</div><div>Oct 23 15:34:46 ovc71.localdomain.local python[1745]: DIGEST-MD5 client step 3</div></div><div><br></div><div>--> I think it is expected that vdsmd starts anyway, even in global maintenance; is that correct?</div><div><br></div><div>But then:</div><div><br></div><div>[root@ovc71 ~]# hosted-engine --set-maintenance --mode=none</div><div>Traceback (most recent call last):</div><div> File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main</div><div> "__main__", fname, loader, pkg_name)</div><div> File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code</div><div> exec code in run_globals</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 73, in <module></div><div> if not maintenance.set_mode(sys.argv[1]):</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 61, in set_mode</div><div> value=m_global,</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 259, in set_maintenance_mode</div><div> str(value))</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 201, in set_global_md_flag</div><div> with broker.connection(self._retries, self._wait):</div><div> File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__</div><div> return self.gen.next()</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 99, in connection</div><div> self.connect(retries, wait)</div><div> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 78, in connect</div><div> raise BrokerConnectionError(error_msg)</div><div>ovirt_hosted_engine_ha.lib.exceptions.BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (1)</div><div><br></div><div>What to do 
next?</div></div><br></div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div></div>
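A hedged sketch of a first troubleshooting pass for the BrokerConnectionError above: hosted-engine talks to the HA broker daemon, so if the broker is not running (for example because the NFS storage domain was not reachable yet at boot), --set-maintenance fails exactly like this. The service names and log path below are assumed from a standard oVirt hosted-engine install (ovirt-ha-broker, ovirt-ha-agent); adjust if your version differs.

```shell
# Check whether the HA daemons came up after boot
# (ovirt-ha-broker serves the connection that --set-maintenance needs)
systemctl status ovirt-ha-broker ovirt-ha-agent

# If either failed, inspect their logs before restarting
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log

# Restart both daemons (broker first; the agent depends on it)
systemctl restart ovirt-ha-broker ovirt-ha-agent

# Give them a minute, verify the HA state, then retry leaving
# global maintenance
hosted-engine --vm-status
hosted-engine --set-maintenance --mode=none
```

This is only a sketch, not a confirmed fix; if the broker keeps refusing connections, the broker.log output would be the next useful thing to post to the list.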