Changing a 1-node Gluster distributed volume to replica

Good day, we currently have a single host that also runs Gluster as a one-node distributed volume. We recently decided to grow to a three-host setup, also running Gluster, with a replica count of 3. Can someone help me safely change the volume type to replicated?

I can guide you through the manual approach, but I'm pretty sure that there is an Ansible role for that purpose.

Best Regards, Strahil Nikolov
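For reference, a minimal pre-check sketch before any conversion (the volume name 'data' is an assumption; substitute your real volume name) to capture the current topology and health:

    # Record the current pool and volume layout before changing anything.
    # 'data' is an assumed volume name.
    gluster pool list              # peers currently in the trusted storage pool
    gluster volume info data       # volume type, brick list, options
    gluster volume status data     # brick processes, ports, health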

Hello Strahil, can you point me in the right direction? I'm worried that our data might be lost in the process.

I don't see how the data will be lost. The only risk I see is adding the 2 new hosts to the Gluster TSP (cluster) and then stopping them for some reason (like maintenance). You would lose quorum (in this hypothetical scenario) and thus all storage would be unavailable.

Overall the process is:
1. Prepare your HW RAID (unless you are using NVMes -> JBOD) and note down the stripe size and the number of data disks (RAID 10 -> half the disks, RAID 5 -> one disk less due to parity)
2. Add the new device to the LVM filter
3. 'pvcreate' with alignment parameters
4. 'vgcreate'
5. Thin pool LV creation with the relevant chunk size (between 1 MB and 2 MB, based on the HW RAID stripe size * number of data disks)
6. 'lvcreate'
7. XFS creation (again, alignment is needed, plus the inode size set to 512)
8. Mount the brick with noatime/relatime (if using SELinux you can use the mount option context="system_u:object_r:glusterd_brick_t:s0"). Don't forget to add it to /etc/fstab or create the relevant systemd '.mount' unit
9. Add the node to the TSP. From the first node: gluster peer probe <new_node>
10. When you have added the 2 new hosts and their bricks are ready to be used: gluster volume add-brick <VOLUME_NAME> replica 3 host2:/path/to/brick host3:/path/to/brick
11. Wait for the healing to be done (a worked example of these steps is sketched below)

Some sources: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/ht...

Best Regards, Strahil Nikolov
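As a hedged illustration of steps 2-11 on one of the new hosts: the device /dev/sdb, a RAID 6 of 12 disks (10 data disks) with a 128 KiB stripe unit, the sizes, the volume name 'data', and the brick path /gluster_bricks/data/data are all assumptions; adjust to your hardware and naming.

    # Step 2: make sure the new device is allowed by the LVM filter first
    # (on oVirt hosts 'vdsm-tool config-lvm-filter' can regenerate it).
    # Steps 3-4: PV/VG aligned to the full stripe (128 KiB * 10 data disks = 1280 KiB)
    pvcreate --dataalignment 1280K /dev/sdb
    vgcreate gluster_vg_sdb /dev/sdb

    # Steps 5-6: thin pool with a chunk size equal to the full stripe, then a thin LV
    lvcreate -L 1T --chunksize 1280K --zero n -T gluster_vg_sdb/gluster_thinpool
    lvcreate -V 1T -T gluster_vg_sdb/gluster_thinpool -n gluster_lv_data

    # Step 7: XFS aligned to the RAID geometry, 512-byte inodes
    mkfs.xfs -f -i size=512 -d su=128k,sw=10 /dev/gluster_vg_sdb/gluster_lv_data

    # Step 8: mount the brick and persist it (add the SELinux context= option
    # mentioned above if you need it)
    mkdir -p /gluster_bricks/data
    mount -o noatime,inode64 /dev/gluster_vg_sdb/gluster_lv_data /gluster_bricks/data
    echo '/dev/gluster_vg_sdb/gluster_lv_data /gluster_bricks/data xfs noatime,inode64 0 0' >> /etc/fstab
    mkdir -p /gluster_bricks/data/data        # the brick directory itself

    # Steps 9-10: from the existing node, once both new hosts are prepared
    gluster peer probe host2
    gluster peer probe host3
    gluster volume add-brick data replica 3 \
            host2:/gluster_bricks/data/data host3:/gluster_bricks/data/data

    # Step 11: watch healing until the pending entry count reaches zero
    gluster volume heal data info summary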

Good day, the distributed volume was created manually. Currently I'm thinking of creating a replica on the two new servers, where one server will hold 2 bricks and be replaced later, and the server hosting 2 bricks will then be rebuilt to hold just 1. I found the image location /gluster_bricks/data/data/19cdda62-da1c-4821-9e27-2b2585ededff/images and plan to create a backup on cold storage, but I'm not sure how to transfer it to a new instance of the engine.
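If you just want a cold copy of that images directory before touching anything, a minimal sketch follows; the destination backup-host:/cold/ovirt-images is a made-up example, and the VMs should be shut down (or snapshotted) first so the copy is consistent:

    # One-off copy of the image store to cold storage, preserving sparseness,
    # hardlinks, ACLs and xattrs. Destination host/path are hypothetical.
    rsync -aHAXS --progress \
          /gluster_bricks/data/data/19cdda62-da1c-4821-9e27-2b2585ededff/images/ \
          backup-host:/cold/ovirt-images/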

It's far simpler to:
- Create a new volume on the new hosts (a replica volume)
- Create a storage domain from that volume
- Live storage-migrate the VMs to the new volume
- Destroy the old volume
- Reuse the bricks from the old volume (don't forget to recreate the FS) and add them to the new one (a sketch of the Gluster commands follows below)

Best Regards, Strahil Nikolov
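A hedged sketch of the Gluster side of that sequence: the new volume name 'data_new', old volume name 'data', hostnames and brick paths are all assumptions, and the storage-domain creation plus live storage migration happen in the oVirt Administration Portal rather than on the CLI.

    # 1) Replica volume across the two new hosts (bricks prepared as in the
    #    earlier sketch); gluster will warn that replica 2 is split-brain prone.
    gluster volume create data_new replica 2 \
            host2:/gluster_bricks/data_new/data_new \
            host3:/gluster_bricks/data_new/data_new
    gluster volume start data_new

    # 2) In the oVirt UI: add a GlusterFS storage domain backed by data_new,
    #    then live-migrate the VM disks onto it.

    # 3) Retire the old distributed volume once nothing uses it.
    gluster volume stop data
    gluster volume delete data

    # 4) On the old host: recreate the filesystem on the freed brick, remount
    #    it, and use it to raise the replica count to 3.
    mkfs.xfs -f -i size=512 /dev/gluster_vg_old/gluster_lv_data   # assumed device name
    mount /dev/gluster_vg_old/gluster_lv_data /gluster_bricks/data_new
    mkdir -p /gluster_bricks/data_new/data_new
    gluster volume add-brick data_new replica 3 host1:/gluster_bricks/data_new/data_new

    # 5) Wait for self-heal to populate the re-added brick.
    gluster volume heal data_new info summary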