<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham <span dir="ltr"><<a href="mailto:dougti@gmail.com" target="_blank">dougti@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><div class="gmail_quote"><span class="m_-686303241814253453gmail-"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>The only problem I need to manage is that my gluster network is shared with the ovirtmgmt one.<br></div><div>Can I move it now with these updated packages?<br></div></div></div></div></blockquote><div><br></div></span><div>Are the gluster peers configured with the same hostnames/IPs as your hosts within oVirt?<br><br>Once they're configured on the same network, separating them might be a bit difficult. Also, the last time I looked, oVirt still didn't support managing HCI oVirt/Gluster nodes running each service on a different interface (see below).<br></div><div><br>In theory, the procedure would involve stopping all of the Gluster processes on all of the peers, updating the peer addresses in the gluster configs on all of the nodes, then restarting glusterd & the bricks. I haven't tested this, however, and it's not a "supported" procedure. I've no idea how oVirt would deal with these changes either.<br></div></div></div></div></blockquote><div><br></div><div>Which version of glusterfs do you have running now? With glusterfs >= 3.9, there's a reset-brick command that can help you do this.<br><br></div><div>It's possible to move to the new interface for gluster.<br><br></div><div>The procedure would be:<br><br></div><div>1. Create a network with the "gluster" network role.<br></div><div>2. 
On each host, use "Setup networks" to associate the gluster network with the desired interface. (This ensures that the engine will peer-probe this interface's IP address as well, so that it can be used to identify the host in the brick definition.)<br></div><div>3. For each of the volume's bricks, change the definition of the brick so that the new IP address is used. Ensure that there is no pending heal (i.e. <code class="m_-686303241814253453gmail-bash m_-686303241814253453gmail-hljs">gluster volume heal VOLNAME info</code> should list 0 entries) before you start this (see <a href="https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/" target="_blank">https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/</a>, "Introducing reset-brick command"):<br><pre><code class="m_-686303241814253453gmail-bash m_-686303241814253453gmail-hljs">gluster volume reset-brick VOLNAME &lt;original-hostname/ip&gt;:BRICKPATH start
gluster volume reset-brick VOLNAME &lt;original-hostname/ip&gt;:BRICKPATH &lt;new-gluster-interface-ip&gt;:BRICKPATH commit force
</code></pre><br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div><br></div><div>To properly separate my own storage & management networks from the beginning, I configured each host with 2 IPs on different subnets and a different 
hostname corresponding to each IP. For example, "v0" points to the management interface of the first node, and "s0" points to the storage interface.<br></div></div><br>oVirt's problem is that, whilst it can see the pre-configured bricks and volumes on each host, it can't create any new bricks or volumes because it wants to use the same hostnames it uses to manage the hosts. It also means that it can't fence the hosts correctly, as it doesn't understand that "v0" & "s0" are the same host.<br></div><div class="gmail_extra">This isn't a problem for me though, as I don't need to manage my Gluster instances via the GUI, and automatic fencing can be done via the IPMI interfaces.<br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Last I read, this is a recognised problem, but a fix isn't expected to arrive any time soon.<span class="m_-686303241814253453gmail-HOEnZb"><font color="#888888"><br></font></span></div><span class="m_-686303241814253453gmail-HOEnZb"><font color="#888888"><div class="gmail_extra"><br>-- <br><div class="m_-686303241814253453gmail-m_4383666370052160357gmail_signature">Doug</div>
</div></font></span></div>
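A minimal illustration of the dual-hostname scheme described above, with one hostname per subnet on each node. The addresses below are hypothetical examples, not values from this thread:

```
# /etc/hosts fragment on each node (addresses are illustrative)
10.0.0.10       v0    # management interface, used by oVirt
192.168.100.10  s0    # storage interface, used by the Gluster peers
```

With this in place, Gluster peers are probed as "s0" while oVirt manages the host as "v0", which is exactly why the engine cannot tell they are the same machine.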
<br>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
<br></blockquote></div><br></div></div>
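The per-brick migration in step 3 above can be sketched as a small script. This is a dry-run sketch only: the volume name, IP addresses, and brick path are hypothetical examples, and `DRY_RUN=1` makes it print the gluster commands instead of executing them.

```shell
#!/bin/sh
# Sketch of the reset-brick migration loop (requires glusterfs >= 3.9).
# Set DRY_RUN=0 to actually run the commands on a gluster node.
DRY_RUN=1

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "$@"          # dry run: print the command that would be executed
  else
    "$@"
  fi
}

# Move one brick from its old (management) address to the new
# (gluster-network) address. Check "gluster volume heal VOLNAME info"
# shows 0 entries before doing this for real.
reset_brick() {
  volname=$1; old=$2; new=$3; brickpath=$4
  # Take the brick offline under its old address...
  run gluster volume reset-brick "$volname" "$old:$brickpath" start
  # ...then bring it back under the new gluster-network address.
  run gluster volume reset-brick "$volname" "$old:$brickpath" \
      "$new:$brickpath" commit force
}

# Hypothetical example: move a brick of volume "data" from the
# management IP to the storage-network IP.
reset_brick data 10.0.0.10 192.168.100.10 /gluster/data/brick1
```

Repeat the `reset_brick` call for every brick of every volume, one at a time, letting any heal complete between bricks.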