On 06/30/2017 04:24 PM, yayo (j) wrote:
2017-06-30 11:01 GMT+02:00 knarra <knarra@redhat.com>:
You do not need to remove the arbiter node, as you are getting the
advantage of saving space by having this config.
Since you have a new node, you can add it as a fourth node, create
another gluster volume (replica 3) out of this node plus the other
two nodes, and run VM images there as well.
Hi,
And thanks for the answer. The current arbiter must be removed because
it is too obsolete. So, I need to add the new "fully replicated" node,
but I want to know what the steps are for adding a new "fully
replicated" node
To add a fully replicated node you need to reduce the replica count to
2 and then add the new brick to the volume so that it becomes replica 3
again. Reducing the replica count by removing a brick from a
replica/arbiter volume cannot currently be done from the UI; it has to
be done using the gluster CLI.
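Roughly, the gluster CLI side would look like the following (the volume
name "data" and the brick path here are only placeholders; use the
names from your own setup):

Remove the arbiter brick so the volume drops to replica 2:
# gluster volume remove-brick data replica 2 <arbiter_host>:/rhgs/bricks/data force

Add the brick on the new node so the volume becomes replica 3 again:
# gluster volume add-brick data replica 3 <new_host>:/rhgs/bricks/data

Watch the self-heal and wait for it to finish before doing anything else:
# gluster volume heal data info
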
AFAIR, there was an issue where VMs were going into a paused state
when reducing the replica count and then increasing it back to 3. I am
not sure whether this still holds with the latest release.
Any specific reason why you want to move to full replication instead of
using an arbiter node?
and remove the arbiter node (also, a way to move the arbiter role to
the new node, if needed)
To move the arbiter role to a new node, you can move the old node to
maintenance, add the new node, and replace the old brick with the new
brick. You can follow the steps below to do that.
* Move the node to be replaced into Maintenance mode.
* Prepare the replacement node.
* Prepare bricks on that node: create the replacement brick directories
  and make sure they are owned by the vdsm user and the kvm group.
    # mkdir /rhgs/bricks/engine
    # chown vdsm:kvm /rhgs/bricks/engine
    # mkdir /rhgs/bricks/data
    # chown vdsm:kvm /rhgs/bricks/data
* Run the following command from one of the healthy cluster members:
    # gluster peer probe <new_node>
* Add the new host to the cluster.
* Add the new host address to the gluster network:
    * Click the Network Interfaces sub-tab.
    * Click Set up Host Networks.
    * Drag and drop the glusternw network onto the IP address of the new host.
    * Click OK.
* Replace the old brick with the brick on the new host from the Bricks
  sub-tab (the equivalent gluster CLI is sketched after this list).
* Verify that the brick heal completes successfully.
* In the Hosts tab, right-click on the old host and click Remove.
* Clean up the old host metadata:
    # hosted-engine --clean-metadata --host-id=<old_host_id> --force-clean
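
For reference, the brick replacement itself can also be done from the
gluster CLI instead of the Bricks sub-tab. A rough sketch, again using
"data" and the brick path only as placeholders (repeat for the engine
volume with its own brick path):

# gluster volume replace-brick data <old_host>:/rhgs/bricks/data <new_host>:/rhgs/bricks/data commit force

and then monitor the heal until it shows no pending entries:

# gluster volume heal data info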
Extra info: I want to know if I can do this on an existing oVirt
gluster Data Domain (called Data01), because we have many VMs running
on it.
When you move your node to maintenance, all the VMs running on that
node will be migrated to another node, and since you have two nodes up
and running there should not be any problem.
thank you