<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 23, 2016 at 5:02 PM, Davide Ferrari <span dir="ltr"><<a href="mailto:davide@billymob.com" target="_blank">davide@billymob.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote"><span class="">2016-09-23 13:17 GMT+02:00 Sahina Bose <span dir="ltr"><<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><span></span><div>What are the stability issues you're facing? The data volume, if used as a data storage domain, should be a replica 3 volume as well.</div></div></div></blockquote><div><br></div></span><div>Basically, after the first host installation+deploy (from the CLI) and after I enable gluster management in the cluster, I have to manually restart vdsmd on host1 to be able to install the other hosts. But maybe I should just wait longer for vdsmd to catch up with everything, I don't know.<br></div></div></div></div></blockquote><div><br></div><div>Once gluster management is enabled on the cluster, we have not noticed the need to restart vdsm to install other hosts. There was an issue where hosts would not be identified as gluster hosts unless they were activated again. 
This will be fixed in the next 4.0 release, as the patches have already been merged.<br><br></div><div>If you encounter the issue again, could you post the hosted-engine deploy logs from the 2nd host?<br> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div>Then I have another problem: a ghost VM stuck on one host after moving the host to maintenance, even though the VM (the hosted-engine, the only one running in the whole cluster) was correctly migrated to another host. It was solved only by a manual reboot of the whole host (and the consequent HE fencing of the host). I must say that that particular host is reporting ECC correction errors on one DIMM, so maybe it is just a hardware-related problem.<br><br></div></div></div></div></blockquote><div><br></div><div>vdsm and engine logs would help here.<br> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div></div><span class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div> </div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div></div><span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div>Deploy the hosted-engine on the first host (with the engine volume) from the CLI, then log in to the oVirt admin portal, enable gluster support, install *and deploy* host2 and host3 from the GUI (where the engine bricks are), and then install host4 without deploying. 
This should get you all 4 hosts online, but the engine will run only on the first 3.<br></div></blockquote><div><br></div></span><div>Right. You can add the 4th node to the cluster without having any bricks on this volume, in which case VMs will run on this node but will access data from the other 3 nodes. </div></div></div></blockquote></span></div><br></div><div class="gmail_extra">Well, actually I *do* have data bricks on the 4th host; it's just the engine volume that's not present there (but that host is not HE-eligible anyway). Am I doing something wrong?<br><br></div></div>
</blockquote></div><br></div><div class="gmail_extra">If you have additional capacity on the other 3 hosts, then yes, you can create a new gluster volume with a brick on the newly added 4th node and bricks from the other nodes - this volume can be used as another storage domain. You are not doing anything wrong :) Keep in mind that all gluster volumes used as data storage domains should be replica 3 or replica 3 arbiter to avoid split-brain and data loss issues.<br><br><br></div></div>
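For readers following this thread, the CLI workflow discussed above looks roughly like this. This is a sketch, not an exact transcript: `hosted-engine --deploy` is interactive and its prompts vary by oVirt version, and the vdsmd restart is only the workaround reported in this thread, not a documented requirement.

```shell
# On host1: deploy the hosted engine from the CLI.
# The interactive setup asks for storage details; point it
# at the gluster engine volume.
hosted-engine --deploy

# Workaround reported in this thread: after enabling gluster
# management on the cluster in the admin portal, restart vdsmd
# on host1 before installing the remaining hosts from the GUI.
systemctl restart vdsmd
```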
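As a sketch of the suggested layout for the new volume, a replica 3 arbiter volume spanning the 4th node and two of the existing hosts could be created as below. The volume name, hostnames, and brick paths are placeholders, not taken from this thread; the arbiter brick holds only metadata, so it needs far less space than the data bricks.

```shell
# Hypothetical layout: host4 and host1 hold full data bricks,
# host2 holds the small arbiter brick (metadata only).
gluster volume create data2 replica 3 arbiter 1 \
  host4:/gluster/bricks/data2 \
  host1:/gluster/bricks/data2 \
  host2:/gluster/bricks/data2-arbiter

# Start the volume before adding it as a storage domain in oVirt.
gluster volume start data2
```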