I don't see how the data will be lost.
The only risk I see is adding the 2 new hosts to the Gluster TSP (cluster) and then stopping them for some reason (like maintenance). You would lose quorum (in this hypothetical scenario) and thus all storage would be unavailable.
Overall, the process is:
1. Prepare your HW RAID (unless you are using NVMes -> JBOD) and note down the stripe size and the number of data disks (RAID 10 -> half the disks, RAID 5 -> total minus 1 disk due to parity)
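For example (just to illustrate the math), a 12-disk RAID 10 has 6 data disks, while an 11-disk RAID 5 has 10 data disks (one disk's worth of capacity goes to parity).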
2. Add the new device to the LVM filter
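For example, in /etc/lvm/lvm.conf (devices section), assuming the new RAID device is /dev/sdb and your existing PVs keep their own accept entries (/dev/sda2 here stands for the OS PV - adjust to your layout):
filter = [ "a|^/dev/sda2$|", "a|^/dev/sdb$|", "r|.*|" ]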
3. 'pvcreate' with alignment parameters
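For example, assuming a 128K stripe and 10 data disks (128K * 10 = 1280K) and that the new device is /dev/sdb:
pvcreate --dataalignment 1280K /dev/sdb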
4. vgcreate
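For example (the VG name is just an assumption):
vgcreate gluster_vg /dev/sdb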
5. Thin pool LV creation with the relevant chunk size (between 1 MB and 2 MB, based on stripe size of the HW RAID * data disks)
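Sticking to the same 1280K example (pool name, size and the 16G metadata size are assumptions on my side):
lvcreate -L 2T --thinpool gluster_vg/gluster_thinpool --chunksize 1280K --poolmetadatasize 16G --zero n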
6. 'lvcreate'
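For example, a thin LV on top of that pool (name and size are assumptions):
lvcreate -V 2T --thin -n gluster_lv gluster_vg/gluster_thinpool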
7. XFS creation (again, alignment is needed + the inode size parameter set to 512)
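Something like this, again assuming a 128K stripe and 10 data disks:
mkfs.xfs -f -i size=512 -d su=128k,sw=10 /dev/gluster_vg/gluster_lv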
8. Mount the brick with noatime/relatime (if using SELinux you can use the mount option context="system_u:object_r:glusterd_brick_t:s0")
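For example (the brick path is just an assumption):
mkdir -p /gluster_bricks/data
mount -o noatime,context="system_u:object_r:glusterd_brick_t:s0" /dev/gluster_vg/gluster_lv /gluster_bricks/data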
Don't forget to add it to /etc/fstab or create the relevant systemd '.mount' unit
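A matching /etc/fstab line could look like this (device and mount point are the same assumptions as above):
/dev/gluster_vg/gluster_lv /gluster_bricks/data xfs noatime,context="system_u:object_r:glusterd_brick_t:s0" 0 0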
9. Add the node to the TSP
From the first node: gluster peer probe <new_node>
10. Once the 2 new hosts are added and their bricks are ready to be used, extend the volume to replica 3:
gluster volume add-brick <VOLUME_NAME> replica 3 host2:/path/to/brick host3:/path/to/brick
11. Wait for the healing to be done
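You can check the progress with:
gluster volume heal <VOLUME_NAME> info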
Some sources:
Best Regards,
Strahil Nikolov