Migrating the HCI storage network to another set of NICs is possible.
WARNING: Ensure that all hosts can reach each other on the new IPs. ssh is a
good test (unless it has been hardened).
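For example (key-based login and the usual root account are assumed here;
new_ip stands for the address on the new NIC):
ssh root@new_ip hostname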
There are 2 paths to take:
- Add the new IP as a peer and use replace-brick to switch to the new
IP/hostname/FQDN:
gluster peer probe new_ip
gluster volume replace-brick VOL old_ip:/path_to_brick new_ip:/path_to_brick
Note: You might need to append 'commit force' to the command in order to do it.
This one is tricky when it comes to cleaning up the old IP: you have to
tinker inside the gluster configs (see the grep example below).
If you are OK with seeing both IPs for the gluster node, then this is the
easier path.
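To find where the old address still lingers, something like this can help
(on typical installs glusterd keeps its state under /var/lib/glusterd):
grep -r old_ip /var/lib/glusterd/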
- Reduce the replica count with remove-brick, gluster peer detach old_ip,
and then gluster peer probe and add-brick:
gluster volume remove-brick VOL replica 2 old_ip:/path_to_brick
Most probably it won't work with 'commit' and you might need 'force'
instead. Then:
gluster peer detach old_ip
gluster peer probe new_ip
mkfs.xfs -f -i size=512 /block/device/hosting/the/brick
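Don't forget to recreate the mount point and mount the freshly formatted
device before adding the brick (reusing the placeholders from above):
mkdir -p /path_to_brick
mount /block/device/hosting/the/brick /path_to_brick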
gluster volume add-brick VOL replica 3 new_ip:/path_to_brick
and wait for the heal to end
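You can watch the heal progress with the following command; repeat it until
no entries are left pending:
gluster volume heal VOL info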
Once done, repeat for all nodes.
Since you have a test gluster cluster, you can try the procedure there first.
On 12 August 2020 at 6:27:56 GMT+03:00, thomas(a)hoberg.net wrote:
>Thanks for putting in the effort!
>I learned a lot of new things.
>I also learned that I need to learn a few more now.
>The table could use some alternating background or a grid: too easy to
>lose your place in the rows.
>Environments change over time, e.g. you find you really should have
>split the management, storage, migration and north-south networks, but
>any documentation I find says "do it right from the start".
>So I wonder if there is any easy enough way to split out the storage
>network to a newly added second set of NICs in a HCI environment, or if
>a re-install is really the only reasonable thing to do.
>They have created such a nice looking GUI around the whole network
>configuration stuff, but from what I have experienced, just hitting
>buttons is very dangerous there, while many don't have a description of
>what they actually do.
>Since you're at it: I have been able to make nested virtualization work
>to a degree.
>3-node HCI (physical) is hosting another 3-node HCI (virtual): the virtual
>oVirt Cockpit deployed a working gluster, launched and prepared the
>HostedEngine at the nested level, and I managed to move it onto the Gluster.
>If I boot the virtual nodes, their VDSM will launch the nested hosted
>engine, but that machine can't see the network any more. I can connect
>to it via hosted-engine --console, it has an Ethernet interface, but no
>traffic gets through either way: Ideas? What do I look for?
>I am wildly guessing that OVN has nesting issues, so would going the
>Linux bridge approach help there? How do I choose between the two? Must
>I leave my beloved Cockpit wizard and use the script installer?