Are you using HW RAID? The stripe size and the number of physical disks are very
important for the storage alignment.
Also, do you use VDO?
LVM alignment:
pvcreate --dataalignment alignment_value disk
For example, if you use 12 disks in RAID6 with a 128KiB stripe unit size, alignment_value
will be '1280k' (the 10 data disks, i.e. 12 disks minus the 2 parity disks, multiplied by 128KiB).
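As a concrete sketch for that layout (the device path /dev/sdb is just a placeholder; use your actual RAID device):
pvcreate --dataalignment 1280k /dev/sdb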
vgcreate --physicalextentsize extentsize VOLGROUP physical_volume
With the same example, extentsize should again be '1280k'.
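For instance (again assuming the hypothetical /dev/sdb from above and a volume group called VOLGROUP):
vgcreate --physicalextentsize 1280k VOLGROUP /dev/sdb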
If you will be relying on Gluster snapshots, you should create the thin pool with a 1-2 MiB
chunk size:
lvcreate --thinpool VOLGROUP/thin_pool --size 800g --chunksize 1280k --poolmetadatasize 16G --zero n
This example is again for RAID6 with 12 disks and a 128KiB stripe size. Note that the
poolmetadata size cannot be changed afterwards, and it's always smart to set it to the maximum (16G).
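If you want to double-check what the pool was actually created with, lvs can report the chunk size (using the names from the example above):
lvs -o +chunk_size VOLGROUP/thin_pool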
For the LV on top of the thin pool you won't need any alignment:
lvcreate --thin --name LV_name --virtualsize LV_size VOLGROUP/thin_pool
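A filled-in example, with a purely illustrative LV name and size:
lvcreate --thin --name gluster_lv1 --virtualsize 700g VOLGROUP/thin_pool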
FS alignment: Creating your XFS filesystem will need alignment again, and also an inode size of 512.
mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 /dev/vg/thinlv
Note: su = the HW stripe unit size, sw = the number of data disks in the RAID.
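Once the brick is mounted (next step), you can verify that XFS picked up the geometry; the reported sunit/swidth should correspond to su and su*sw (the mount point here is just an example):
xfs_info /gluster/brick1 | grep -E 'sunit|swidth'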
I would mount that filesystem with the following options (SELinux enabled):
context=system_u:object_r:glusterd_brick_t:s0,noatime,inode64
If you disabled SELinux, then you won't need the 'context=' option.
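In /etc/fstab that would look roughly like this (mount point and LV name are placeholders):
/dev/VOLGROUP/LV_name /gluster/brick1 xfs noatime,inode64,context="system_u:object_r:glusterd_brick_t:s0" 0 0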
Once you have your brick restored, you can remove the old one and re-add the new one:
gluster volume remove-brick <VOLUME> replica 2 <serverX:/PATH/to/failed/brick> force
Note: ensure that you are removing the brick that has really failed. The full path to the brick
can be taken from 'gluster volume info'. The previous command will turn the volume into
'replica 2', which is sensitive to split-brain. Don't do any maintenance until
you add and heal the 3rd brick.
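For example, to list the brick paths before removing anything:
gluster volume info <VOLUME> | grep -i brick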
Next, re-add the fixed brick:
gluster volume add-brick <VOLUME> replica 3 <serverX:/PATH/to/new/brick>
You can force a full heal:
gluster volume heal <VOLUME> full
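You can then monitor the healing progress with:
gluster volume heal <VOLUME> info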
About the disk alignment, you can read:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1...
I know that there is an Ansible role that can do that for you, but I have never used it
directly.
If you use VDO, I can try to find what I'm using (which might not be the best).
Best Regards,
Strahil Nikolov