Hmm... right as I said that, I just had a thought.
I DO have a "backup" server in place (one that I haven't even started using
yet), and it currently has some empty hard drive bays.
It would take some extra work, but I could use that 4th server as a temporary
staging ground to begin building the new Gluster configuration. Once I have that server +
2 of my production servers rebuilt properly, I could then simply swap this "backup"
server out of the cluster and put my 3rd production server in its place.
So this effectively means that I can take 2 servers down completely at the same
time to rebuild gluster, instead of just 1. I think that simplifies things.
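If it helps to sketch that final swap, it might look roughly like the following.
The host, volume, and brick names here are hypothetical (not from this thread),
so treat it as an outline only:

    # "backup4" = the temporary staging server, "server3" = the production
    # node that will take its place, "data" = an example volume.
    gluster peer probe server3
    gluster volume replace-brick data backup4:/gluster_bricks/data/brick \
        server3:/gluster_bricks/data/brick commit force

    # Let self-heal finish copying onto the new brick before detaching:
    gluster volume heal data info
    gluster peer detach backup4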
------- Original Message -------
On Saturday, July 10th, 2021 at 6:14 AM, David White <dmwhite823@protonmail.com> wrote:
Thank you. And yes, I agree, this needs to occur in a maintenance
window and be done very carefully. :)
My only problem with this method is that I need to *replace* disks in
the two servers.
I don't have any empty hard drive bays, so I will effectively need to put a host into
maintenance mode, remove the drives, and put the new drives in.
I will NOT be touching the OS drives, however, as those are on their
own separate RAID array.
So, essentially, it will need to look something like this:
- Put the cluster into global maintenance mode
- Put 1 host into full maintenance mode / deactivate it
- Stop gluster
- Remove the storage
- Add the new storage & reconfigure
- Start gluster
- Re-add the host to the cluster
Adding the new storage & reconfiguring is the head-scratcher for
me, given that I don't have room for the old hard drives + the new hard drives at the same
time. A rough sketch of how I picture the steps above follows.
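In shell terms, something like this per host. The global maintenance commands
are real hosted-engine commands, but the mount point is made up, and
deactivating/reactivating the host itself happens in the oVirt web UI:

    # On a hosted-engine host: put the cluster into global maintenance
    hosted-engine --set-maintenance --mode=global

    # After deactivating the host in the oVirt UI, on that host:
    systemctl stop glusterd
    umount /gluster_bricks/data       # hypothetical brick mount point
    # ...physically swap the drives, rebuild the LVM/XFS layout...
    systemctl start glusterd

    # Reactivate the host in the UI, then leave global maintenance:
    hosted-engine --set-maintenance --mode=none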
------- Original Message -------
On Saturday, July 10th, 2021 at 5:55 AM, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
> Hi David,
>
> any storage operation can cause unexpected situations, so always
> plan your activities for low-traffic hours and test them on your test
> environment in advance.
>
> I think it's easier if you do the following (from the command line):
>
> - verify no heals are pending. Not a single one.
> - set the host to maintenance over oVirt
> - remove the third node from the gluster volumes (remove-brick replica 2)
> - umount the bricks on the third node
> - recreate a smaller LV with '-i maxpct=90 size=512' and mount it with the
>   same options as the rest of the nodes. Usually I use
>   'noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0'
> - add this new brick (add-brick replica 3 arbiter 1) to the volume
> - wait for the heals to finish
>
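> Roughly, that sequence looks like the sketch below. The volume, VG/LV,
> and brick names are made up (adjust them to your own layout), and the
> brick lives in a subdirectory of the mount, as gluster expects:
>
>     gluster volume heal data info        # must show zero pending entries
>     gluster volume remove-brick data replica 2 \
>         node3:/gluster_bricks/data/brick force
>     umount /gluster_bricks/data
>     lvremove -y vg_gluster/lv_data       # hypothetical VG/LV names
>     lvcreate -L 100G -n lv_arbiter vg_gluster
>     mkfs.xfs -f -i size=512,maxpct=90 /dev/vg_gluster/lv_arbiter
>     mount -o noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0 \
>         /dev/vg_gluster/lv_arbiter /gluster_bricks/data
>     mkdir /gluster_bricks/data/brick
>     gluster volume add-brick data replica 3 arbiter 1 \
>         node3:/gluster_bricks/data/brick
>     gluster volume heal data info        # then wait until this is clean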
> Then repeat again for each volume.
>
> Adding the new disks should be done later.
>
> Best Regards,
> Strahil Nikolov
>
> > On Sat, Jul 10, 2021 at 3:15, David White via Users <users@ovirt.org> wrote:
> > My current hyperconverged environment is replicating data across all 3 servers.
> > I'm running critically low on disk space, and need to add space.
> >
> > To that end, I've ordered 8x 800GB SSD drives, and plan
> > to put 4 drives in 1 server, and 4 drives in the other.
> >
> > What's my best option for reconfiguring the
> > hyperconverged cluster to change gluster storage away from Replica 3 to a
> > Replica 2 / Arbiter model?
> > I'd really prefer not to have to reinstall things from scratch, but
> > I'll do that if I have to.
> >
> > My most important requirement is that I cannot have any
> > downtime for my VMs (so I can only reconfigure 1 host at a time).