Hi Jayme & Strahil,
Thank you again for your messages.
Reading https://stackoverflow.com/questions/52394849/can-i-change-gluster...,
I think I understand now what Strahil is suggesting.
It sounds like you're saying I can reconfigure the existing gluster volume in place
(without destroying it) by changing it from a Replica 3 to a Replica 2 / Arbiter 1.
Once that's done, I can then put the two hosts into maintenance mode one at a time to
rebuild their RAID arrays, and heal the volume onto the larger RAID.
So it would look like this:
1) Put 3rd host into maintenance mode & verify no heals are necessary
2) Make the 3rd host the arbiter
2a) remove-brick replica 2
-- if I understand correctly, this will basically just reconfigure the existing volume to
replicate between the 2 bricks, and not all 3 ... is this correct?
2b) add-brick replica 3 arbiter 1
-- If I understand correctly, this will reconfigure the volume (again), adding the 3rd
server's storage back to the Gluster volume, but only as an arbiter node, correct?
3) With everything healthy, the volume is now a Replica 2 / Arbiter 1... and I can
stop gluster on each of the 2 servers getting the storage upgrade (one at a time), rebuild
the RAID on the new storage, reboot, and let gluster heal itself before moving on to the
next server.
Do I understand this process right?
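In command terms, I'm picturing steps 2a/2b as roughly the following. This is only a
sketch -- the volume name ("myvol"), the hostname and the brick paths are placeholders,
not my actual config:

    # confirm nothing needs healing before touching anything
    gluster volume heal myvol info

    # 2a) drop the 3rd server's brick, leaving a plain replica 2 volume
    gluster volume remove-brick myvol replica 2 host3:/gluster_bricks/myvol/brick force

    # (on host3: recreate a small brick filesystem for the arbiter, per Strahil's notes below)

    # 2b) add that brick back as the arbiter
    gluster volume add-brick myvol replica 3 arbiter 1 host3:/gluster_bricks/myvol/arbiter

    # wait for heals to drop to zero before going any further
    gluster volume heal myvol info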
Jayme's suggestion to use my 4th server as a temporary NFS location is not a bad idea. I
could definitely do that, "just in case" the Gluster volume got corrupted.
Unfortunately, my backup server has spinning disks instead of SSDs, but the speed
difference shouldn't be too noticeable, and I could do it over a weekend or something.
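If I go the NFS route, I imagine the backup-server side would look roughly like this
(the export path is a placeholder, and oVirt expects the export to be owned by vdsm:kvm,
i.e. 36:36):

    mkdir -p /exports/temp-vm-storage
    chown 36:36 /exports/temp-vm-storage
    echo '/exports/temp-vm-storage *(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra

Then I'd attach it as a new NFS storage domain and live-migrate the disks onto it, and
back again once the gluster work is done.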
Thanks again for your input.
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, July 10th, 2021 at 6:53 AM, Jayme <jaymef(a)gmail.com> wrote:
Just a thought, but depending on resources you might be able to use
your 4th server as NFS storage and live-migrate VM disks to it and off of your gluster
volumes. I've done this in the past when doing major maintenance on gluster volumes, to err
on the side of caution.
On Sat, Jul 10, 2021 at 7:22 AM David White via Users
<users(a)ovirt.org> wrote:
> Hmm.... right as I said that, I just had a thought.
> I DO have a "backup" server in place (that I haven't even started
using yet), that currently has some empty hard drive bays.
>
> It would take some extra work, but I could use that 4th backup
server as a temporary staging ground to begin building the new Gluster configuration. Once
I have that server + 2 of my production servers rebuilt properly, I could then simply
remove and replace this "backup" server with my 3rd server in the cluster.
>
> So this effectively means that I have 2 servers that I can take down completely at the
same time to rebuild gluster, instead of just 1. I think that simplifies things.
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>
> On Saturday, July 10th, 2021 at 6:14 AM, David White
<dmwhite823(a)protonmail.com> wrote:
>
> > Thank you. And yes, I agree, this needs to occur in a
maintenance window and be done very carefully. :)
> >
> > My only problem with this method is that I need to
*replace* disks in the two servers.
> > I don't have any empty hard drive bays, so I will effectively need to put a
host into maintenance mode, remove the drives, and put new drives in.
> >
> > I will NOT be touching the OS drives, however, as those are
on their own separate RAID array.
> >
> > So, essentially, it will need to look something like this:
> >
> > - Put the cluster into global maintenance mode
> > - Put 1 host into full maintenance mode / deactivate it
> > - Stop gluster
> > - Remove the storage
> > - Add the new storage & reconfigure
> > - Start gluster
> > - Re-add the host to the cluster
> >
> > Adding the new storage & reconfiguring is the head
scratcher for me, given that I don't have room for the old hard drives + new hard
drives at the same time.
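A rough command-level sketch of that per-host sequence, as I understand it (the volume
name and brick mount point are placeholders):

    # from any host, before starting
    hosted-engine --set-maintenance --mode=global

    # put the host into Maintenance in the Admin Portal, then on that host:
    systemctl stop glusterd
    umount /gluster_bricks/myvol

    # ...swap the drives, rebuild the RAID, recreate and remount the brick filesystem...

    systemctl start glusterd

    # reactivate the host, then wait for heals to finish before the next one
    gluster volume heal myvol info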
> >
> > Sent with ProtonMail Secure Email.
> >
> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> >
> > On Saturday, July 10th, 2021 at 5:55 AM, Strahil Nikolov
<hunter86_bg(a)yahoo.com> wrote:
> >
> > > Hi David,
> > >
> > > any storage operation can cause unexpected situations,
so always plan your activities for low-traffic hours and test them on your test environment
in advance.
> > >
> > > I think it's easier if you (command line):
> > >
> > > - verify no heals are pending. Not a single one.
> > > - set the host to maintenance over ovirt
> > > - remove the third node from gluster volumes (remove-brick replica 2)
> > > - umount the bricks on the third node
> > > - recreate a smaller LV with '-i maxpct=90 size=512' and mount it with the same options as the rest of the nodes. Usually I use 'noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0'
> > > - add this new brick (add-brick replica 3 arbiter 1) to the volume
> > > - wait for the heals to finish
> > >
> > > Then repeat again for each volume.
> > >
> > > Adding the new disks should be done later.
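Putting Strahil's arbiter-brick preparation into concrete commands as I understand them
(the VG/LV names, size and mount point are placeholders, not real values):

    # on the third host, after remove-brick and unmounting the old brick
    lvremove /dev/vg_gluster/lv_myvol                 # drop the old full-size LV
    lvcreate -L 20G -n lv_arbiter vg_gluster          # arbiter holds only metadata, so small is fine
    mkfs.xfs -i size=512,maxpct=90 /dev/vg_gluster/lv_arbiter
    mkdir -p /gluster_bricks/myvol/arbiter
    mount -o noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0 \
        /dev/vg_gluster/lv_arbiter /gluster_bricks/myvol/arbiter

As I understand it, the arbiter brick stores only file names and metadata (no data),
which is why a much smaller LV is enough.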
> > >
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > > > On Sat, Jul 10, 2021 at 3:15, David White via Users <users(a)ovirt.org> wrote:
> > > > My current hyperconverged environment is replicating data across all 3 servers.
> > > > I'm running critically low on disk space, and need to add space.
> > > >
> > > > To that end, I've ordered 8x 800GB SSD drives, and plan to put 4 drives in one server and 4 drives in the other.
> > > >
> > > > What's my best option for reconfiguring the
hyperconverged cluster, to change gluster storage away from Replica 3 to a Replica 2 /
Arbiter model?
> > > > I'd really prefer not to have to reinstall things from scratch,
but I'll do that if I have to.
> > > >
> > > > My most important requirement is that I cannot
have any downtime for my VMs (so I can only reconfigure 1 host at a time).
> > > >
> > > > Sent with ProtonMail Secure Email.
> > > >