
I don't see how the data will be lost. The only risk I see is adding the 2 new hosts to the Gluster TSP (cluster) and then stopping them for some reason (like maintenance). You would lose quorum (in this hypothetical scenario) and thus all storage would be unavailable.

Overall the process is:

1. Prepare your HW RAID (unless you are using NVMes -> JBOD) and note down the stripe size and the number of data disks (RAID 10 -> split in half, RAID 5 -> minus 1 disk due to parity)
2. Add the new device to the LVM filter
3. 'pvcreate' with alignment parameters
4. 'vgcreate'
5. Thin pool LV creation with the relevant chunk size (between 1 MB and 2 MB, based on stripe size of the HW RAID * data disks)
6. 'lvcreate'
7. XFS creation (again, alignment is needed + inode size set to 512)
8. Mount the brick with noatime/relatime (if using SELinux you can use the mount option context="system_u:object_r:glusterd_brick_t:s0"). Don't forget to add it to /etc/fstab or create the relevant systemd '.mount' unit
9. Add the node to the TSP. From the first node: gluster peer probe <new_node>
10. When you add the 2 new hosts and their bricks are ready to be used: gluster volume add-brick <VOLUME_NAME> replica 3 host2:/path/to/brick host3:/path/to/brick
11. Wait for the healing to be done

Some sources:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/ht...

Best Regards,
Strahil Nikolov

On Sun, May 9, 2021 at 7:04, Ernest Clyde Chua <ernestclydeachua@gmail.com> wrote:

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/M76UNA6EBWSWZV...
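P.S. To make the brick-preparation steps (pvcreate through mount) more concrete, here is a rough sketch of the commands. It assumes a hypothetical RAID 10 array of 12 disks (6 data disks) with a 256 KiB stripe; the device name /dev/sdb, the VG/LV names, sizes, and the mount point are all placeholders to substitute with your own values:

```shell
# Assumption: /dev/sdb is the new HW RAID device; 6 data disks, 256 KiB stripe.
# Full stripe width = 256K * 6 = 1536K -> used for alignment and chunk size.

# (Step 2 is editing the filter in /etc/lvm/lvm.conf to allow /dev/sdb.)

pvcreate --dataalignment 1536K /dev/sdb
vgcreate gluster_vg /dev/sdb

# Thin pool with a chunk size matching the full stripe width (1.5 MB,
# inside the 1-2 MB range mentioned above)
lvcreate --type thin-pool -L 2T --chunksize 1536K \
  --poolmetadatasize 16G -n gluster_thinpool gluster_vg

# Thin LV for the brick
lvcreate -V 2T -T gluster_vg/gluster_thinpool -n gluster_lv

# XFS aligned to the RAID geometry (su = stripe unit, sw = data disks),
# with the 512-byte inode size Gluster needs
mkfs.xfs -f -i size=512 -d su=256k,sw=6 /dev/gluster_vg/gluster_lv

mkdir -p /gluster_bricks/brick1
mount -o noatime,inode64 /dev/gluster_vg/gluster_lv /gluster_bricks/brick1
echo '/dev/gluster_vg/gluster_lv /gluster_bricks/brick1 xfs noatime,inode64 0 0' >> /etc/fstab
```

These commands need root and a real block device, so treat them as a template rather than something to paste verbatim.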
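P.P.S. The peer probe / add-brick / heal steps at the end could look roughly like this; host names, the volume name, and brick paths are placeholders:

```shell
# From an existing node, add both new peers to the trusted storage pool
gluster peer probe host2
gluster peer probe host3
gluster peer status    # verify both show as connected before continuing

# Grow the volume to replica 3 with one brick per new host
gluster volume add-brick myvolume replica 3 \
  host2:/gluster_bricks/brick1/brick host3:/gluster_bricks/brick1/brick

# Watch the self-heal until no entries remain
gluster volume heal myvolume info
```

Only run add-brick once both bricks are mounted and the peers are in the pool, otherwise the command will fail or leave the volume partially expanded.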