On February 19, 2020 10:31:04 PM GMT+02:00, adrianquintero(a)gmail.com wrote:
Strahil,
Let me take a step back and provide a bit more context of what we need
to achieve.
We have a 3-node HCI setup, and host1 has a failed drive (/dev/sdd)
that is used entirely for one brick (/gluster_bricks/data2/data2);
this is a Replica 3 brick setup.
The issue we have is that we don't have any more drive bays in our
enclosure cages to add an extra disk and use it to replace the bad
drive/Brick (/dev/sdd).
What would be the best way to replace the drive/brick in this
situation, and in what order do the steps need to be completed?
I think I know how to replace a bad brick with a different brick and
get things up and running, but in this case, as mentioned above, I have
no more drive bays to allocate a new drive.
Hm...
What about the LVM setup?
I will assume some things, but that doesn't mean they are right.
Assumptions:
/dev/sdd is the only PV in the VG
You have a thin pool on top of /dev/sdd
When /dev/sdd is missing -> the VG disappears
You have excluded /dev/sdd from multipath.
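To verify those assumptions on the affected node (just a quick sanity check):
pvs -o pv_name,vg_name,pv_size
vgs -o vg_name,pv_count,vg_size,vg_free
lvs -a -o lv_name,vg_name,pool_lv,lv_size
multipath -ll | grep -i sdd    # should print nothing if sdd is really excluded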
So, you need to:
1. Remove the broken disk (it's useless anyway)
2. Push in the new one
3. If you have a RAID controller - create a logical LUN with only the new disk
4. The new disk will be something like /dev/sdz
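If the new disk doesn't show up on its own, you can check/rescan like this (host0 is just an example - pick the right SCSI host on your system):
lsblk -o NAME,SIZE,MODEL,SERIAL
echo "- - -" > /sys/class/scsi_host/host0/scan    # rescan only if the disk is not visible yet
dmesg | tail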
5. Create new PV, VG , LV
vgcreate OLD_VG /dev/sdz
Note: If you use filters - pick the proper disk name
lvcreate -l 100%FREE -T OLD_VG/OLD_THINPOOL
lvcreate -V <old_size>G -T OLD_VG/OLD_THINPOOL -n OLD_THIN_VOL_NAME
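If you are not sure about the exact OLD_VG / OLD_THINPOOL / OLD_THIN_VOL_NAME values and the <old_size>, one of the healthy peers should still have them (assuming all 3 nodes were deployed the same way; 'node2' is just an example):
ssh node2 'lvs -a -o lv_name,vg_name,pool_lv,lv_size'
gluster volume info data2    # also confirms the brick path on each node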
6. Create a filesystem
mkfs.xfs /dev/OLD_VG/OLD_THIN_VOL_NAME
Note: you might get errors from all LVM commands complaining about /dev/sdd. To get rid of
that, you have to delete that disk from the kernel:
echo 1 > /sys/block/sdd/device/delete
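For the mkfs itself, I would match the options of the other bricks (the '-i size=512' below is the usual Gluster recommendation, but check a healthy brick first):
xfs_info /gluster_bricks/data2    # run on a healthy node to see the old options
mkfs.xfs -f -i size=512 /dev/OLD_VG/OLD_THIN_VOL_NAME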
7. Update /etc/fstab (or the systemd mount unit) if the old FS was not XFS
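A minimal fstab entry could look like this (the mount options are just an example - copy whatever your other bricks use):
/dev/OLD_VG/OLD_THIN_VOL_NAME  /gluster_bricks/data2  xfs  inode64,noatime  0 0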
8. Mount the brick:
mount /gluster_bricks/data2
mkdir /gluster_bricks/data2/data2
restorecon -RFvv /gluster_bricks/data2
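Quick check that the brick directory is mounted and labeled properly (the context should be the gluster brick one, glusterd_brick_t, after the restorecon):
df -h /gluster_bricks/data2
ls -ldZ /gluster_bricks/data2/data2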
9. Reset the brick
A) You can do it from the oVirt UI -> Storage -> Volumes -> Bricks -> Reset Brick
(top left)
B) gluster volume reset-brick data2 node_name:/gluster_bricks/data2/data2
node_name:/gluster_bricks/data2/data2 commit force
C) gluster volume replace-brick data2 node_name:/gluster_bricks/data2/data2
node_name:/gluster_bricks/data2/data2 commit force
Note: I had a bad experience with 'reset-brick' in gluster v3, so I prefer to
use replace-brick.
Note2: What's the difference between B & C -> replace-brick allows the
destination to be another server and/or another directory; reset-brick doesn't allow
that.
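After B) or C), the self-heal should kick in. I would keep an eye on it with something like ('heal info summary' needs a newer gluster; plain 'heal info' works everywhere):
gluster volume status data2
gluster volume heal data2 info summary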
Once the brick is up - we should be ready to go.
Best Regards,
Strahil Nikolov