[ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

knarra knarra at redhat.com
Mon Jul 3 13:42:10 UTC 2017


On 07/03/2017 06:58 PM, knarra wrote:
> On 07/03/2017 06:53 PM, yayo (j) wrote:
>> Hi,
>>
>> And sorry for the delay
>>
>> 2017-06-30 14:09 GMT+02:00 knarra <knarra at redhat.com>:
>>
>>     To add a fully replicated node you need to reduce the replica
>>     count to 2 and then add the new brick to the volume so that it
>>     becomes replica 3. Reducing the replica count by removing a brick
>>     from a replica / arbiter volume cannot currently be done from the
>>     UI; it has to be done using the gluster CLI.
>>     AFAIR, there was an issue where VMs would go into a paused state
>>     when reducing the replica count and then increasing it back to 3.
>>     Not sure if this still holds with the latest release.
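>>
>>     For example, the CLI sequence would look something like this (a
>>     rough sketch only; the volume name "data", the arbiter host
>>     "node3" and the new host "node4" are placeholders for your setup):
>>
>>     # gluster volume remove-brick data replica 2 \
>>         node3:/rhgs/bricks/data force
>>     # gluster volume add-brick data replica 3 node4:/rhgs/bricks/data
>>     # gluster volume heal data full
>>
>>     The same would have to be repeated for the other volumes (engine,
>>     and so on).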
>>
>>     Any specific reason why you want to move to full replication
>>     instead of using an arbiter node?
>>
>>
>> We have a new server with the same hard disk size as the other two
>> nodes, so why not? Why join the cluster as an arbiter when we have the
>> same disk capacity and can add a full extra replica?
>>
>>
>>>     and remove the arbiter node (also a way to move the arbiter role
>>>     to the new node, if needed)
>>     To move the arbiter role to a new node you can move the node to be
>>     replaced to maintenance, add the new node, and replace the old
>>     brick with the new brick. You can follow the steps below to do that.
>>
>>       * Move the node to be replaced into Maintenance mode
>>       * Prepare the replacement node
>>       * Prepare bricks on that node.
>>       * Create replacement brick directories
>>       * Ensure the new directories are owned by the vdsm user and the
>>         kvm group.
>>       * # mkdir /rhgs/bricks/engine
>>       * # chown vdsm:kvm /rhgs/bricks/engine
>>       * # mkdir /rhgs/bricks/data
>>       * # chown vdsm:kvm /rhgs/bricks/data
>>       * Run the following command from one of the healthy cluster
>>         members:
>>       * # gluster peer probe <new_node>
>>       * Add the new host to the cluster.
>>       * Add new host address to gluster network
>>       * Click Network Interfaces sub-tab.
>>       * Click Set up Host Networks.
>>       * Drag and drop the glusternw network onto the IP address of
>>         the new host.
>>       * Click OK
>>       * Replace the old brick with the brick on the new host (a CLI
>>         example for this step follows the list).
>>       * Click the Bricks sub-tab.
>>       * Verify that brick heal completes successfully.
>>       * In the Hosts tab, right-click on the old host and click Remove.
>>       * Clean old host metadata
>>       * # hosted-engine --clean-metadata --host-id=<old_host_id>
>>         --force-clean
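>>
>>     For the brick replacement and heal verification steps above, the
>>     gluster CLI equivalent would look something like this (the volume
>>     name "engine" and the host names "old-node" / "new-node" are
>>     placeholders for your setup; repeat for each volume):
>>
>>     # gluster volume replace-brick engine old-node:/rhgs/bricks/engine \
>>         new-node:/rhgs/bricks/engine commit force
>>     # gluster volume heal engine info
>>
>>     Wait until "Number of entries: 0" is reported for every brick
>>     before removing the old host.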
>>
>>
>>
>> Do I need this (read: do I need the arbiter role) if I reduce the
>> replica count and then add the new node as a full replica, increasing
>> the replica count back to 3? (As you explained above)
>>
> The above steps hold good if you want to move the arbiter role to a new
> node.
>
> If you want to move to a full replica, reducing the replica count will
> work fine, but increasing it again back to 3 might cause VM pause issues.
So, please power off your VMs while performing this.
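
Once the arbiter brick has been removed and the new full brick has healed,
you can confirm that the volume is now a plain replica 3 before powering
the VMs back on, for example (the volume name "data" is a placeholder):

# gluster volume info data

The output should show "Number of Bricks: 1 x 3 = 3" instead of the
arbiter layout "1 x (2 + 1) = 3".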
>
>
>
>
>> Thank you
>
>
>
>

