
On February 20, 2020 1:19:41 AM GMT+02:00, eevans@digitaldatatechs.com wrote:
This is just my opinion, but I would attach external storage and dd the drive, using the option to ignore read errors. Then replace the drive(s) and dd the image back onto the replacement drive. I don't know whether gluster has a snapshot or backup of this drive that would rebuild it for you.
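Something like this is what I have in mind (device names and the external mount point are assumptions - adjust to your layout; conv=noerror,sync keeps going past bad sectors and pads unreadable blocks with zeros so offsets stay aligned):

  # image the failing disk onto external storage, skipping read errors
  dd if=/dev/sdd of=/mnt/external/sdd.img bs=64K conv=noerror,sync status=progress
  # after swapping disks, write the image back to the replacement
  # (assuming it shows up as /dev/sdd again)
  dd if=/mnt/external/sdd.img of=/dev/sdd bs=64K status=progress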
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Strahil Nikolov <hunter86_bg@yahoo.com>
Sent: Wednesday, February 19, 2020 4:48 PM
To: users@ovirt.org; adrianquintero@gmail.com
Subject: [ovirt-users] Re: Damaged hard disk in volume replica gluster
On February 19, 2020 10:31:04 PM GMT+02:00, adrianquintero@gmail.com wrote:
Strahil, let me take a step back and provide a bit more context on what we need to achieve.
We have a 3-node HCI setup, and host1 has a failed drive (/dev/sdd) that is used entirely for one brick (/gluster_bricks/data2/data2); this is a replica 3 setup.
The issue is that we don't have any free drive bays in our enclosure cages to add an extra disk and use it to replace the bad drive/brick (/dev/sdd).
What would be the best way to replace the drive/brick in this situation, and in what order should the steps be completed?
I think I know how to replace a bad brick with a different brick and get things up and running, but as mentioned above I have no more drive bays to allocate to a new drive.
Hm... what about the LVM setup?
I will make some assumptions below, but that doesn't mean they are right.
Assumptions:
- /dev/sdd is the only PV in the VG
- You have a thin pool on top of /dev/sdd
- When /dev/sdd went missing, the VG disappeared
- You have excluded /dev/sdd from multipath
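You can check those assumptions with a few read-only commands before touching anything (the device name is just what I'm guessing from your description):

  pvs -o pv_name,vg_name,pv_size   # is /dev/sdd really the only PV in its VG?
  lvs -a                           # thin pool and thin LV layout
  lsblk /dev/sdd                   # what is stacked on top of the disk
  multipath -ll                    # make sure sdd is not claimed by multipath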
So, you need to:
1. Remove the broken disk (it's useless anyway)
2. Push in the new one
3. If you have a raid controller - create a logical lun with only the new disk
4. The new disk will be something like /dev/sdz
5. Create a new PV, VG and LV (vgcreate will initialize /dev/sdz as a PV for you):
   vgcreate OLD_VG /dev/sdz
   Note: If you use filters - pick the proper disk name
   lvcreate -l 100%FREE -T OLD_VG/OLD_THINPOOL
   lvcreate -V <old_size>G -T OLD_VG/OLD_THINPOOL -n OLD_THIN_VOL_NAME
6. Create a filesystem:
   mkfs.xfs /dev/OLD_VG/OLD_THIN_VOL_NAME
Note: you might get errors from all LVM commands complaining about /dev/sdd. To get rid of that, you have to delete the dead disk from the kernel:
   echo 1 > /sys/block/sdd/device/delete
7. Update /etc/fstab (or the systemd mount unit) - especially if the old FS was not XFS; a sketch of the entry is after this list
8. Mount the brick and restore the SELinux context:
   mount /gluster_bricks/data2
   mkdir /gluster_bricks/data2/data2
   restorecon -RFvv /gluster_bricks/data2
9. Reset the brick. You have options:
A) From the oVirt UI -> Storage -> Volumes -> Bricks -> Reset Brick (top left)
B) gluster volume reset-brick data2 node_name:/gluster_bricks/data2/data2 node_name:/gluster_bricks/data2/data2 commit force
C) gluster volume replace-brick data2 node_name:/gluster_bricks/data2/data2 node_name:/gluster_bricks/data2/data2 commit force
Note: I had a bad experience with 'reset-brick' in gluster v3, so I prefer to use replace-brick.
Note 2: What's the difference between B and C? replace-brick allows the destination to be another server and/or another directory; reset-brick doesn't allow that.
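For step 7, a sketch of the fstab entry, assuming the VG/LV names from step 5 and the mount point from your description (adjust names and mount options to your setup):

  /dev/mapper/OLD_VG-OLD_THIN_VOL_NAME /gluster_bricks/data2 xfs inode64,noatime 0 0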
Once the brick is up - we should be ready to go.
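To confirm the brick really is up and the volume is healing onto it, the standard checks are (volume name taken from your setup):

  gluster volume status data2
  gluster volume heal data2 info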
Best Regards,
Strahil Nikolov
Hi Eric,
As this is a replica volume, you already have the data at two other locations. You don't need the broken disk's data - you can access the other bricks and back up from there. Gluster doesn't care about anything but the mount point. You can set up a new brick on the same or a different path, and the replace-brick option will take care of it.
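If you do want a copy anyway, you can pull it from a surviving replica - a rough sketch (the host name and backup path are assumptions; going through the glusterfs mount point instead of the raw brick also works and skips the internal .glusterfs metadata):

  rsync -aAX node2:/gluster_bricks/data2/data2/ /backup/data2/

Best Regards,
Strahil Nikolov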