I'm not sure about the GUI (though I think it has the option), but on the command line you have several options.
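One common command-line route is Gluster's reset-brick, since the rebuilt brick will keep the same host and path. A rough sketch, assuming the volume is named data, the dead brick is the one on 172.16.70.92, and the new array appears as /dev/sdb (the device name and LV sizing are guesses; the volume name and brick paths are taken from your status output below):

```shell
# Run on the affected host (host2) after the new RAID 0 array is built.

# 1. Recreate the LVM layout and filesystem on the new array
#    (/dev/sdb is an assumption -- substitute your new array's device):
pvcreate /dev/sdb
vgcreate gluster_vg_sdb /dev/sdb
lvcreate -l 100%FREE -n gluster_lv_data gluster_vg_sdb
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_data
mount /dev/gluster_vg_sdb/gluster_lv_data /gluster_bricks/data
mkdir /gluster_bricks/data/data

# 2. Tell Gluster to reuse the same brick path (reset-brick keeps the
#    brick address, so the volume layout does not change):
gluster volume reset-brick data 172.16.70.92:/gluster_bricks/data/data start
gluster volume reset-brick data 172.16.70.92:/gluster_bricks/data/data \
    172.16.70.92:/gluster_bricks/data/data commit force

# 3. Trigger the self-heal that resyncs from the two healthy bricks,
#    then watch its progress:
gluster volume heal data full
gluster volume heal data info summary
```

If the new brick ends up on a different path or host, `gluster volume replace-brick <vol> <old-brick> <new-brick> commit force` is the alternative. Once the heal count drops to zero you can take host2 out of maintenance in the oVirt web UI.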
On Tue, Jun 22, 2021 at 2:01, Dominique D <dominique.deschenes@gcgenicom.com> wrote:

> Yesterday I had a disk failure on my stack of 3 oVirt 4.4.1 nodes.
> On each server I have 3 bricks (engine, data, vmstore):
>
> brick data:    4x600 GB RAID 0, /dev/gluster_vg_sdb/gluster_lv_data    mounted at /gluster_bricks/data
> brick engine:  2x1 TB RAID 1,   /dev/gluster_vg_sdc/gluster_lv_engine  mounted at /gluster_bricks/engine
> brick vmstore: 2x1 TB RAID 1,   /dev/gluster_vg_sdc/gluster_lv_vmstore mounted at /gluster_bricks/vmstore
>
> Everything was configured through the GUI (hyperconverged and hosted-engine).
>
> It is the RAID 0 of the 2nd server that broke. All VMs were automatically
> moved to the other two servers; I haven't lost any data. Host2 is now in
> maintenance mode.
>
> I am going to buy 4 new SSDs to replace the 4 disks of the defective RAID 0.
> Once I erase the faulty RAID 0 and create the new array with the new disks on
> the RAID controller, how do I add it back in oVirt so that it resynchronizes
> with the other data bricks?
>
> Status of volume: data
> Gluster process                               TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 172.16.70.91:/gluster_bricks/data/data  49153     0          Y       79168
> Brick 172.16.70.92:/gluster_bricks/data/data  N/A       N/A        N       N/A
> Brick 172.16.70.93:/gluster_bricks/data/data  49152     0          Y       3095
> Self-heal Daemon on localhost                 N/A       N/A        Y       2528
> Self-heal Daemon on 172.16.70.91              N/A       N/A        Y       225523
> Self-heal Daemon on 172.16.70.93              N/A       N/A        Y       3121
>
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/