[ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine
knarra
knarra at redhat.com
Fri Jun 30 12:09:32 UTC 2017
On 06/30/2017 04:24 PM, yayo (j) wrote:
>
> 2017-06-30 11:01 GMT+02:00 knarra <knarra at redhat.com>:
>
> You do not need to remove the arbiter node as you are getting the
> advantage of saving on space by having this config.
>
>     Since you have a new node you can add it as a fourth node and create
>     another gluster volume (replica 3) out of this node plus the other
>     two nodes, and run VM images there as well.
>
>
> Hi,
>
> And thanks for the answer. The current arbiter node must be removed because
> it is too old. So I need to add the new "fully replicated" node, but
> I want to know what the steps are for adding a new "fully replicated" node
To add a fully replicated node you need to reduce the replica count to
2 by removing the arbiter brick, then add the new brick to the volume so
that it becomes replica 3 again. Reducing the replica count by removing a
brick from a replica/arbiter volume cannot currently be done from the UI;
this has to be done using the gluster CLI.
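For example, assuming the volume is named <volname> and the arbiter brick
sits at <old_arbiter>:/rhgs/bricks/data (both placeholders, adjust to your
setup), the CLI steps would look roughly like this (untested sketch):

# gluster volume remove-brick <volname> replica 2 <old_arbiter>:/rhgs/bricks/data force
# gluster volume add-brick <volname> replica 3 <new_node>:/rhgs/bricks/data
# gluster volume heal <volname> info

The remove-brick drops the volume to a plain replica 2, the add-brick with
the brick on the new full node takes it back to replica 3, and heal info
lets you watch until self-heal has finished.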
AFAIR, there was an issue where VMs were going into a paused state when
reducing the replica count and then increasing it back to 3. Not sure if
this still holds with the latest release.
Any specific reason why you want to move to full replication instead of
using an arbiter node?
> and remove the arbiter node (also, a way to move the arbiter role to
> the new node, if needed).
To move the arbiter role to a new node you can move the old node to
maintenance, add the new node, and replace the old brick with the new
brick. You can follow the steps below to do that.
* Move the node to be replaced into Maintenance mode
* Prepare the replacement node
* Prepare bricks on that node.
* Create replacement brick directories
* Ensure the new directories are owned by the vdsm user and the kvm group.
* # mkdir /rhgs/bricks/engine
* # chown vdsm:kvm /rhgs/bricks/engine
* # mkdir /rhgs/bricks/data
* # chown vdsm:kvm /rhgs/bricks/data
* Run the following command from one of the healthy cluster members:
* # gluster peer probe <new_node>
* Add the new host to the cluster.
* Add new host address to gluster network
* Click Network Interfaces sub-tab.
* Click Set up Host Networks.
* Drag and drop the glusternw network onto the IP address of the new host.
* Click OK
* Replace the old brick with the brick on the new host (see the CLI sketch
after this list)
* Click the Bricks sub-tab.
* Verify that brick heal completes successfully.
* In the Hosts tab, right-click on the old host and click Remove.
* Clean old host metadata
* # hosted-engine --clean-metadata --host-id=<old_host_id> --force-clean
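If you prefer the gluster CLI over the Bricks sub-tab for the replace-brick
step, it is roughly (again with <volname>, <old_host> and <new_host> as
placeholders for your own volume and hosts; untested sketch):

# gluster volume replace-brick <volname> <old_host>:/rhgs/bricks/data <new_host>:/rhgs/bricks/data commit force
# gluster volume heal <volname> info

The replace-brick ... commit force swaps in the brick on the new host and
triggers self-heal; keep checking heal info until no entries are pending
before removing the old host.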
> Extra info: I want to know if I can do this on an existing oVirt
> gluster Data Domain (called Data01) because we have many VMs running on it.
When you move your node to maintenance, all the VMs running on that node
will be migrated to another node, and since you have two other nodes up
and running there should not be any problem.
>
> thank you