Hi Strahil,

here it is:

[root@ovnode2 ~]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: c6535ef6-c5c5-4097-8eb4-90b9254570c5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.70.91:/gluster_bricks/data/data
Brick2: 172.16.70.92:/gluster_bricks/data/data
Brick3: 172.16.70.93:/gluster_bricks/data/data
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on


Dominique Deschênes
Project Engineer, IT Manager
816, boulevard Guimond, Longueuil J4G 1T5
450 670-8383 x105 450 670-2259


----- Message received -----

From: Strahil Nikolov via Users (users@ovirt.org)
Date: 21/06/21 23:38
To: dominique.deschenes@gcgenicom.com, users@ovirt.org
Subject: [ovirt-users] Re: Disk (brick) failure on my stack

I'm not sure about the GUI (though I think it has the option), but from the command line you have several options.


1. Use gluster's 'remove-brick replica 2' (with the force flag)
and then 'add-brick replica 3'.
2. Use the old way, 'replace-brick'.

If you need guidance, please provide the output of 'gluster volume info <VOLUME>'.
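
A minimal sketch of both approaches, assuming the failed brick is 172.16.70.92:/gluster_bricks/data/data on volume 'data' (the brick shown offline in the status output below); the rebuilt brick directory must exist and be empty before it is added back, and the data_new path in option 2 is only a hypothetical example:

# Option 1: drop to replica 2, then re-add the rebuilt brick as the third replica
gluster volume remove-brick data replica 2 172.16.70.92:/gluster_bricks/data/data force
gluster volume add-brick data replica 3 172.16.70.92:/gluster_bricks/data/data
gluster volume heal data full    # trigger a full heal onto the fresh brick

# Option 2: replace the dead brick in one step, pointing at a fresh brick directory
gluster volume replace-brick data 172.16.70.92:/gluster_bricks/data/data 172.16.70.92:/gluster_bricks/data/data_new commit force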

Best Regards,
Strahil Nikolov

On Tue, Jun 22, 2021 at 2:01, Dominique D wrote:
Yesterday I had a disk failure on my stack of 3 oVirt 4.4.1 nodes.

On each server I have 3 bricks (engine, data, vmstore):

brick data:    4 x 600 GB RAID0, /dev/gluster_vg_sdb/gluster_lv_data mounted at /gluster_bricks/data
brick engine:  2 x 1 TB RAID1, /dev/gluster_vg_sdc/gluster_lv_engine mounted at /gluster_bricks/engine
brick vmstore: 2 x 1 TB RAID1, /dev/gluster_vg_sdc/gluster_lv_vmstore mounted at /gluster_bricks/vmstore

Everything was configured through the GUI (hyperconverged setup and hosted-engine).

It is the RAID0 of the 2nd server that broke.

All VMs were automatically moved to the other two servers; I haven't lost any data.

Host2 is now in maintenance mode.

I am going to buy 4 new SSD disks to replace the 4 disks of the defective RAID0.

When I erase the faulty RAID0 and create the new RAID with the new disks on the RAID controller, how do I add it back in oVirt so that it resynchronizes with the data on the other bricks?
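
Once the rebuilt brick is back in the volume, resync progress can be watched with the standard heal commands (a minimal sketch, using the 'data' volume from the status below):

gluster volume heal data info summary   # per-brick count of entries still pending heal
gluster volume heal data info           # list the individual entries that still need healing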

Status of volume: data
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.16.70.91:/gluster_bricks/data/data    49153     0          Y       79168
Brick 172.16.70.92:/gluster_bricks/data/data    N/A       N/A        N       N/A
Brick 172.16.70.93:/gluster_bricks/data/data    49152     0          Y       3095
Self-heal Daemon on localhost              N/A      N/A        Y      2528
Self-heal Daemon on 172.16.70.91            N/A      N/A        Y      225523
Self-heal Daemon on 172.16.70.93            N/A      N/A        Y      3121

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/KHBYPWP23TIKH6KOYBFLBSWLOFWVYVV7/