Gluster volumes not healing (perhaps after host maintenance?)
by David White
I discovered that the servers I purchased did not come with the 10Gbps network cards I thought they had, so my storage network has been running on a 1Gbps connection since I deployed the servers into the datacenter a little over a week ago. I purchased 10Gbps cards and put one of my hosts into maintenance mode yesterday to replace its daughter card. It is now back online and running fine on the 10Gbps card.
All VMs seem to be working, even when I migrate them onto cha2, which is the host I did maintenance on yesterday morning.
The other two hosts are still running on the 1Gbps connection, but I plan to do maintenance on them next week.
The oVirt manager shows that all 3 hosts are up, and that all of my volumes - and all of my bricks - are up. However, every time I look at the storage, the self-heal info for one of the volumes reads 10 minutes, and for another volume 50+ minutes.
This morning is the first time in the last couple of days that I've paid close attention to the numbers, but I don't see them going down.
When I log into each of the hosts, I do see everything is connected in gluster.
Interestingly, in this particular case, gluster on cha3 lists the peer 10.1.0.10 by its IP address rather than by its hostname (cha1).
The host that I did the maintenance on is cha2.
[root@cha3-storage dwhite]# gluster peer status
Number of Peers: 2

Hostname: 10.1.0.10
Uuid: 87a4f344-321a-48b9-adfb-e3d2b56b8e7b
State: Peer in Cluster (Connected)

Hostname: cha2-storage.mgt.barredowlweb.com
Uuid: 93e12dee-c37d-43aa-a9e9-f4740b9cab14
State: Peer in Cluster (Connected)
When I run `gluster volume heal data`, I see the following:
[root@cha3-storage dwhite]# gluster volume heal data
Launching heal operation to perform index self heal on volume data has been unsuccessful:
Commit failed on cha2-storage.mgt.barredowlweb.com. Please check log file for details.
I get the same results if I run the command on cha2, for any volume:
[root@cha2-storage dwhite]# gluster volume heal data
Launching heal operation to perform index self heal on volume data has been unsuccessful:
Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details.
[root@cha2-storage dwhite]# gluster volume heal vmstore
Launching heal operation to perform index self heal on volume vmstore has been unsuccessful:
Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details.
I see a lot of entries like this in /var/log/glusterfs/glustershd.log on cha2:
[2021-04-24 11:33:01.319888] I [rpc-clnt.c:1975:rpc_clnt_reconfig] 2-engine-client-0: changing port to 49153 (from 0)
[2021-04-24 11:33:01.329463] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 2-engine-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}]
[2021-04-24 11:33:01.330075] W [MSGID: 114043] [client-handshake.c:727:client_setvolume_cbk] 2-engine-client-0: failed to set the volume [{errno=2}, {error=No such file or directory}]
[2021-04-24 11:33:01.330116] W [MSGID: 114007] [client-handshake.c:752:client_setvolume_cbk] 2-engine-client-0: failed to get from reply dict [{process-uuid}, {errno=22}, {error=Invalid argument}]
[2021-04-24 11:33:01.330140] E [MSGID: 114044] [client-handshake.c:757:client_setvolume_cbk] 2-engine-client-0: SETVOLUME on remote-host failed [{remote-error=Brick not found}, {errno=2}, {error=No such file or directory}]
[2021-04-24 11:33:01.330155] I [MSGID: 114051] [client-handshake.c:879:client_setvolume_cbk] 2-engine-client-0: sending CHILD_CONNECTING event []
[2021-04-24 11:33:01.640480] I [rpc-clnt.c:1975:rpc_clnt_reconfig] 3-vmstore-client-0: changing port to 49154 (from 0)
The message "W [MSGID: 114007] [client-handshake.c:752:client_setvolume_cbk] 3-vmstore-client-0: failed to get from reply dict [{process-uuid}, {errno=22}, {error=Invalid argument}]" repeated 4 times between [2021-04-24 11:32:49.602164] and [2021-04-24 11:33:01.649850]
[2021-04-24 11:33:01.649867] E [MSGID: 114044] [client-handshake.c:757:client_setvolume_cbk] 3-vmstore-client-0: SETVOLUME on remote-host failed [{remote-error=Brick not found}, {errno=2}, {error=No such file or directory}]
[2021-04-24 11:33:01.649969] I [MSGID: 114051] [client-handshake.c:879:client_setvolume_cbk] 3-vmstore-client-0: sending CHILD_CONNECTING event []
[2021-04-24 11:33:01.650095] I [MSGID: 114018] [client.c:2225:client_rpc_notify] 3-vmstore-client-0: disconnected from client, process will keep trying to connect glusterd until brick's port is available [{conn-name=vmstore-client-0}]
How do I further troubleshoot?
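For anyone landing here from a search, a minimal first-pass checklist for this symptom (standard gluster CLI; the glusterd restart at the end is a judgment call that assumes the other two replicas are healthy, so check the heal counts and brick status first):

# Summarize pending heals per brick (run from any peer)
gluster volume heal data info summary
gluster volume heal vmstore info summary

# Confirm every brick process is online and listening on its advertised port;
# the "Brick not found" / SETVOLUME errors above suggest cha2's brick
# processes are not registered on the ports the clients are dialing
gluster volume status data
gluster volume status vmstore

# If cha2's bricks show "N" or a missing port, restarting glusterd on cha2
# re-registers/re-spawns the brick processes and glustershd (assumption:
# the other two replicas are healthy, so client I/O is not interrupted)
systemctl restart glusterd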
poweroff and reboot with ovirt_vm ansible module
by Nathanaël Blanchet
Hello, is there a way to power off or reboot a VM with the ovirt_vm ansible module, without going through the stopped and running states?
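A sketch of what the module documents (engine URL, credentials and VM name are illustrative; `state: stopped` with `force: true` gives a hard poweroff rather than a graceful guest shutdown):

# Hypothetical one-off playbook, written out and run from the shell
cat > vm_power.yml <<'EOF'
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Obtain an SSO token (url/credentials are illustrative)
      ovirt.ovirt.ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ lookup('env', 'OVIRT_PASSWORD') }}"
        insecure: true

    - name: Hard poweroff - stopped + force skips the graceful shutdown
      ovirt.ovirt.ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: myvm
        state: stopped
        force: true

    # Newer ovirt.ovirt releases also document 'state: reboot' on ovirt_vm;
    # verify with `ansible-doc ovirt.ovirt.ovirt_vm` for your version
    # before relying on it.
EOF
ansible-playbook vm_power.yml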
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr
[OLVM] Host non responsive after installation
by alan@softdrive.co
I am using Oracle Linux Virtualization Manager, following this guide: https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-man...
After adding a host to the engine, the host becomes non-responsive due to network errors:
engine.log
2021-04-27 14:53:02,255Z ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-32356) [38586e0e] Host installation failed for host 'c97604b3-5774-4260-92fd-633257aa7498', 'GPU2-2': Network error during communication with the host
Help resolving this would be much appreciated!
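A few hedged starting points (log locations are the stock oVirt/OLVM paths; the hostname is taken from the error above):

# On the engine: the full host-deploy transcript for the failed install
ls -t /var/log/ovirt-engine/host-deploy/
# then: less /var/log/ovirt-engine/host-deploy/<newest file>

# From the engine machine, confirm basic reachability of the host
ping -c3 GPU2-2
ssh root@GPU2-2 true

# On the host: vdsm must be running and listening on TCP 54321
systemctl status vdsmd
ss -tlnp | grep 54321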
Something broke & took down multiple VMs for ~20 minutes
by David White
As the subject suggests, something in oVirt HCI broke. I have no idea what, and it recovered on its own after about 20 minutes or so.
I believe the issue was limited to a single host (although I don't know that for sure), as two VMs went completely unresponsive while a 3rd VM remained operational. For a while during the outage I was able to log into the oVirt admin web portal, and I noticed that at least 1-2 of my hosts (I have 3) were showing the affected VMs as problematic.
Reviewing the oVirt Events, this basically started right when the ETL Service started: there had been no events since yesterday, but from that moment it seems like all hell broke loose.
oVirt detected "No faulty multipaths" on any of the hosts, but then very quickly started indicating that hosts, VMs, and storage targets were unavailable. See my screenshot below.
Around 30 - 35 minutes later, it appears that the Hosted Engine terminated due to a storage issue, and auto recovered on a different host. There's a 2nd screenshot beneath the first.
Everything came back up shortly before 9am, and has been stable since.
In fact, the Volume replication issues that I saw in my environment after I performed maintenance on 1 of my hosts on Friday are no longer present. It appears that the Hosted Engine sees the storage as being perfectly healthy.
How do I even begin to figure out what happened, and try to prevent it from happening again?
[Screenshot from 2021-04-26 16-36-47.png]
[Screenshot from 2021-04-26 16-44-08.png]
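A sketch of how to reconstruct the timeline afterwards (stock log locations for an oVirt HCI deployment; the time window is illustrative, matching the roughly-9am recovery described above):

# On the engine VM: what the engine itself recorded
grep -E 'ERROR|WARN' /var/log/ovirt-engine/engine.log | less

# On each host: VDSM and storage-lease (sanlock) activity in the window
journalctl --since "08:00" --until "09:30" -u vdsmd -u sanlock
less /var/log/vdsm/vdsm.log

# Gluster side: the fuse-mount logs for the storage domains, plus self-heal
less /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log
less /var/log/glusterfs/glustershd.log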
pool list vm assign user
by Dominique D
Is there a way to see who the VMs of a pool are assigned to?
On the portal I can see the ones with a logged-in user, but for the other VMs I don't know to whom they are assigned.
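One way to get this without clicking through every VM is the REST API (engine URL and credentials are illustrative): when a pool VM is allocated, the user is granted a permission on that VM, so the VM's permissions listing names the assignee.

# List VMs (each pool member carries a <vm_pool> element in its XML)
curl -s -k -u 'admin@internal:password' -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vms'

# Permissions on a given VM; a user permission entry names the assignee
curl -s -k -u 'admin@internal:password' -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vms/<vm-id>/permissions'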
oVirt 2021 Spring survey questions
by Sandro Bonazzola
Hi,
it's about the usual time of the year when we ask the community to provide
feedback with a survey.
Any questions you'd like to be asked?
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
error id 980
by ozmen62@hotmail.com
Hi,
First of all, thanks for this great list and the people who try to help each other.
This time my problem is about the engine OVF file or the engine storage settings.
Why do I think this? Because every 2-3 hours the engine tries to migrate to the other host (from host1 to host2), with:
"Invalid status on Data Center B300. Setting status to Non Responsive."
When I check the hosts' logs, I see these:
ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf
ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
ovirt-ha-agent ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore ERROR Unable to extract HEVM OVF
I believe there is some kind of FC storage path or config problem, although my storage settings look good and there are no errors about them.
Has anyone experienced this and solved it?
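Some standard places to start for this class of error (the multipath check assumes the FC-backed hosted-engine storage domain described above):

# HA state as each hosted-engine host sees it
hosted-engine --vm-status

# The agent/broker logs the OVF_STORE errors are written from
less /var/log/ovirt-hosted-engine-ha/agent.log
less /var/log/ovirt-hosted-engine-ha/broker.log

# FC path health for the storage domain LUNs
multipath -ll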
Thanks
Cluster level 4.5
by Don Dupuis
What version of libvirt is required for a host to be put in this cluster
level? I am using CentOS 8.3 and the CPU is a Cascade Lake server. It says that
my host is only compatible with cluster versions 4.2, 4.3 and 4.4. I am doing
a new install of oVirt 4.4.5. I have tried to update the libvirt version but
have run into issues. The currently installed libvirt
is libvirt-6.0.0-28.module_el8.3.0+555+a55c8938.x86_64.
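A quick way to compare what the host advertises against what the engine wants (a sketch; on EL8 the newer qemu/libvirt needed for higher cluster levels generally comes from an Advanced Virtualization 'virt' module stream rather than the default one):

# Supported cluster levels as advertised by vdsm on this host
vdsm-client Host getCapabilities | grep -A3 -i clusterLevels

# Installed virt stack
rpm -q vdsm libvirt-daemon qemu-kvm

# Which 'virt' module stream is enabled
dnf module list virt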
Don
new engine restore from backup
by ozmen62@hotmail.com
Hi,
I've built a new engine and tried to restore it from a backup.
After running engine-setup, it returns:
[ ERROR ] Failed to execute stage 'Misc configuration': function getdwhhistorytimekeepingbyvarname(unknown) does not exist
LINE 2: select * from GetDwhHistoryTimekeepingByVarName(
The Red Hat page says to deploy a new engine, but I have already done that.
The old and new engines are the same version.
Do you have any idea how I can restore the engine?
There is no other backup.
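For reference, a missing GetDwhHistoryTimekeepingByVarName function usually means the DWH history database was never restored or provisioned on the new machine. A sketch of a full restore that provisions both databases (file names are illustrative):

# If engine-setup has already run on this machine, wipe its config first
engine-cleanup

# Restore the backup, provisioning both the engine and DWH databases
engine-backup --mode=restore \
  --file=/root/engine-backup.tar.gz \
  --log=/root/engine-restore.log \
  --provision-db --provision-dwh-db --restore-permissions

# Then run setup against the restored databases
engine-setup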