* gluster volume info all
On Sun, Feb 20, 2022 at 14:46, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
In order to have an idea how to help you, provide the following from all nodes (separate the info per node):

ip a s
gluster pool list
gluster peer status
gluster volume list
gluster volume status all
gluster volume all

Best Regards,
Strahil Nikolov
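(A minimal sketch for gathering all of the above in one pass, assuming
passwordless root SSH; node1 through node3 are placeholder hostnames:)

    #!/bin/bash
    # Collect Gluster diagnostics from every node, one section per host.
    # The hostnames are placeholders; substitute your own.
    for host in node1 node2 node3; do
        echo "===== ${host} ====="
        ssh "root@${host}" '
            ip a s
            gluster pool list
            gluster peer status
            gluster volume list
            gluster volume status all
            gluster volume info all
        '
    done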
On Sun, Feb 20, 2022 at 7:14, Patrick Hibbs <hibbsncc1701@gmail.com> wrote:
OK, where to begin.
As for your Gluster issue, Gluster maintains its own copy of the
configuration for each brick outside of oVirt / VDSM. Since you
changed the network config manually, you also needed to change the
Gluster config to match. The fact that you haven't is the
reason why Gluster failed to restart the volume.
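(For instance, you can see exactly where the stale name survives in
Gluster's own state. /var/lib/glusterd is glusterd's standard state
directory; OLD_HOSTNAME is a placeholder for the name you changed away
from:)

    # Run on each node:
    grep -r 'OLD_HOSTNAME' /var/lib/glusterd/peers/   # peer definitions
    grep -r 'OLD_HOSTNAME' /var/lib/glusterd/vols/    # per-brick volume metadata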
However, in a hyperconverged configuration, oVirt maintains the gluster
configuration in its database. Manually fixing Gluster's configuration
on the bricks themselves won't fix the engine's copy. (Believe me, I
had to fix this myself because I didn't use hostnames initially
for the bricks. It's a pain to fix the database manually.) That copy is
used to connect the VMs to their storage. If the engine's copy doesn't
match Gluster's config, you'll have a working Gluster volume but the
hosts won't be able to start VMs.
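(If you want to look at the engine's copy for yourself, a read-only
query like the sketch below is a reasonable starting point. The
database name, user, and the gluster_volume_bricks table match typical
engine deployments, but verify with \dt before trusting them:)

    # On the engine host; read-only, changes nothing.
    sudo -u postgres psql engine -c 'SELECT * FROM gluster_volume_bricks;'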
Essentially, in a hyperconverged configuration oVirt doesn't allow
removal of a host with a Gluster brick unless removing that host won't
break Gluster and prevent the volume from running. (I.e., you can't
remove a host if doing so would cause the volume to lose quorum.)
Your options for fixing Gluster are either:
1. Add enough new bricks to the Gluster volumes so that
removal of an old host (brick) doesn't cause quorum loss.
- OR -
2. Manually update the engine's database, with the engine and
all hosts offline, to point to the correct hosts, after manually
updating the bricks and bringing the volume back up.
The first option is your safest bet. But that assumes that the volume
is up and can accept new bricks in the first place. If not, you could
potentially still do the first option but it would require reverting
your network configuration changes on each host first.
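(For reference, the CLI shape of option 1 on a plain replica volume
looks roughly like the sketch below. VOLNAME, NEWHOST, OLDHOST, and the
brick paths are placeholders, arbiter volumes need the arbiter count
handled too, and in oVirt you would normally do this from the web
interface rather than the CLI:)

    # Grow the replica set with a brick on a fresh host:
    gluster volume add-brick VOLNAME replica 4 NEWHOST:/gluster_bricks/data/brick
    # Wait for self-heal to finish before removing anything:
    gluster volume heal VOLNAME info
    # Then drop the old host's brick, shrinking back to replica 3:
    gluster volume remove-brick VOLNAME replica 3 OLDHOST:/gluster_bricks/data/brick force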
The second option is one of last resort. This is the reason why I said
updating the interfaces manually instead of using the web interface was
a bad idea. If possible, use the first option. If not, you'd be better
off just hosing the oVirt installation and reinstalling from scratch.
If you *really* need to use the second option, you'll need to follow
these instructions on each brick:
https://serverfault.com/questions/631365/rename-a-glusterfs-peer
and then update the engine database manually to point to the correct
hostnames for each brick. (Keep in mind I am *NOT* recommending that
you do this. This information is provided for educational /
experimental purposes only.)
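(The rough shape of that rename, per the link above, with OLD_HOSTNAME
and NEW_HOSTNAME as placeholders. Again, educational purposes only:)

    # Stop glusterd on ALL nodes first, then on each node:
    systemctl stop glusterd
    # Rewrite every reference to the old name in glusterd's state files:
    grep -rl 'OLD_HOSTNAME' /var/lib/glusterd | \
        xargs sed -i 's/OLD_HOSTNAME/NEW_HOSTNAME/g'
    systemctl start glusterd
    gluster peer status   # confirm the peers now show the new name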
As for Matthew's solution, the only reason it worked at all is that
you removed and re-added the host from the cluster. Had you not done
that, VDSM would have overwritten your changes on the next host upgrade
/ reinstall, and, as you have seen, that solution won't completely fix
a host in a hyperconverged configuration.
As to the question about oVirt's Logical Networks, what I meant was
that oVirt doesn't care what the IP configuration is for them, and that
if you wanted to change which network the roles used you needed to do
so elsewhere in the web interface. The only thing that does matter for
each role is that all of the clients using or hosts providing that role
can communicate with each other on that interface. (I.e. If you use
"Network Bob" for storage and migration, then all hosts with a "Network
Bob" interface must be able to communicate with each other over that
interface. If you use "Network Alice" for VM consoles, then all end-
user workstations must be able to communicate with the "Network Alice"
interface. The exact IPs, VLAN IDs, routing tables, and firewall
restrictions for a logical network don't matter as long as each role
can still reach the same role on other hosts over the assigned
interface.)
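(A quick way to test that a role's network actually carries traffic
between hosts; the interface name, peer address, and port below are all
placeholders:)

    ping -c 3 -I enp3s0 192.0.2.12   # can we reach the peer over this specific NIC?
    nc -zv 192.0.2.12 24007          # e.g. glusterd's port, if this is the storage network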
-Patrick Hibbs
On Sun, 2022-02-20 at 01:17 +0000, Abe E wrote:
> So upon changing my oVirt nodes' (3-node hyperconverged Gluster) as
> well as my engine's hostname without a hitch, I had an issue with 1
> node: somehow I did something that broke its gluster and it wouldn't
> activate.
> The gluster service won't start, and after opening the node
> from the web GUI to see what its virtualization tab shows, I
> saw that it allows me to run the hyperconverged wizard using
> the existing config. Due to this I lost the engine, because the
> 3rd node is just an arbiter and node 2 complained about not having
> shared storage.
>
> This node is the one I built the oVirt gluster from, so I assumed it
> would rebuild its gluster. I accidentally clicked cleanup, which got
> rid of my gluster brick mounts :)) Then I tried to halt it and
> rebuild using the existing configuration. Here is my issue though: am
> I able to rebuild my node?
>
> This is a new lab system, so I believe I still have all my VMs on my
> external HDDs. If I can restore this 1 node and have it rejoin the
> gluster, then great; otherwise, what's the best route using the web
> GUI (I am remote at the moment) to just wipe all 3 nodes, start all
> over again, and work through it slowly? Is simply deleting the
> partitions for the oVirt glusters on each node enough to let me
> rebuild?
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDIEDTHA6ZYY45CYPDEU3IJJ4ARSUEIU/
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/6RDDDCPOQOQSCQTBIJHCDREC5ZSDGZG4/