Hi @Dominique,

Once you have added the SSDs to your host, you can follow this link [1] to do a replace host.

Replace host is essentially:
1) Preparing your new node based on the currently healthy nodes in your cluster.
2) After the node is added to the cluster, Gluster syncs it with the other nodes, and your cluster will be back to healthy.

[1] https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/README#L57
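
A rough sketch of that workflow from the command line, assuming the inventory and playbook names used in the hc-ansible-deployment examples (the exact file names are assumptions on my side, please verify them against the README above before running):

  # From the hc-ansible-deployment directory on a healthy host:
  # 1) describe the new node in the prep/replace inventories
  #    (node_prep_inventory.yml / node_replace_inventory.yml are assumed names - check the README)
  # 2) run the replace-node playbook
  ansible-playbook -i node_prep_inventory.yml -i node_replace_inventory.yml tasks/replace_node.yml

  # afterwards, watch self-heal catch the new bricks up
  gluster volume heal data info

Once the heal counts drop to zero on all volumes, the cluster is healthy again.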

Regards,
Prajith


On Tue, Jun 22, 2021 at 10:56 AM Ritesh Chikatwar <rchikatw@redhat.com> wrote:
Adding Prajith,


Will replace host work in this case? If yes, please share your thoughts here.

On Tue, Jun 22, 2021 at 9:09 AM Strahil Nikolov via Users <users@ovirt.org> wrote:
I'm not sure about the GUI (but I think it has the option), but on the command line you have several options.

1. Use Gluster's 'remove-brick replica 2' (with the force flag)
and then 'add-brick replica 3'
2. Use the old way, 'replace-brick' (rough examples of both are below)
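
Rough examples of both approaches, using the 'data' volume and the brick paths from your status output below as placeholders (host2 being 172.16.70.92) - double-check them against 'gluster volume info' before running anything:

  # Option 1: remove the dead brick, then add the rebuilt one back
  # (add-brick with the same path works here because the brick filesystem
  #  will be freshly created on the new SSDs)
  gluster volume remove-brick data replica 2 172.16.70.92:/gluster_bricks/data/data force
  gluster volume add-brick data replica 3 172.16.70.92:/gluster_bricks/data/data
  gluster volume heal data full

  # Option 2: replace the brick in one step
  # (replace-brick needs a destination different from the source,
  #  e.g. a new brick directory on the rebuilt array)
  gluster volume replace-brick data 172.16.70.92:/gluster_bricks/data/data 172.16.70.92:/gluster_bricks/data/data_new commit force

Repeat per volume (engine, vmstore) and watch 'gluster volume heal data info' until all entries are healed.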

If you need guidance, please provide the output of 'gluster volume info <VOLUME>'.

Best Regards,
Strahil Nikolov

On Tue, Jun 22, 2021 at 2:01, Dominique D wrote:
Yesterday I had a disk failure on my stack of 3 oVirt 4.4.1 nodes.

On each server I have 3 bricks (engine, data, vmstore):

brick data: 4x600 GB RAID 0, /dev/gluster_vg_sdb/gluster_lv_data mounted on /gluster_bricks/data
brick engine: 2x1 TB RAID 1, /dev/gluster_vg_sdc/gluster_lv_engine mounted on /gluster_bricks/engine
brick vmstore: 2x1 TB RAID 1, /dev/gluster_vg_sdc/gluster_lv_vmstore mounted on /gluster_bricks/vmstore

Everything was configured via the GUI (hyperconverged and hosted-engine).

It is the RAID 0 on the 2nd server that broke.

All VMs were automatically moved to the other two servers; I haven't lost any data.

Host2 is now in maintenance mode.

I am going to buy 4 new SSDs to replace the 4 disks of the defective RAID 0.

Once I erase the faulty RAID 0 and create the new array with the new disks on the RAID controller, how do I add it back in oVirt so that it resynchronizes with the other data bricks?

Status of volume: data
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.70.91:/gluster_bricks/data/data  49153     0          Y       79168
Brick 172.16.70.92:/gluster_bricks/data/data  N/A       N/A        N       N/A
Brick 172.16.70.93:/gluster_bricks/data/data  49152     0          Y       3095
Self-heal Daemon on localhost                 N/A       N/A        Y       2528
Self-heal Daemon on 172.16.70.91              N/A       N/A        Y       225523
Self-heal Daemon on 172.16.70.93              N/A       N/A        Y       3121
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHBYPWP23TIKH6KOYBFLBSWLOFWVYVV7/