[ovirt-users] Ovirt 4.0.6 guests 'Not Responding'
Michal Skrivanek
michal.skrivanek at redhat.com
Tue Feb 7 10:14:07 UTC 2017
> On 6 Feb 2017, at 16:20, Mark Greenall <m.greenall at iontrading.com> wrote:
>
> Hi Pavel,
>
> Thanks for responding. I bounced the VDSMD service, the guests recovered, and the monitor and queue-full messages also cleared. However, we kept getting intermittent “Guest x Not Responding” messages from the Hosted Engine; in most cases the guests would recover almost immediately, but on the odd occasion they stayed “Not Responding” and I had to bounce the VDSMD service again. The host had a memory load of around 85% (out of 768GB) and a CPU load of around 65% (48 cores). I have since added another host to that cluster and spread the guests between the two hosts, which seems to have completely cleared the messages (at least for the last 5 days).
>
> I suspect the problem is load related. At what capacity would Ovirt regard a host as being ‘full’?
the above sounds ok, but one of the best indicators is the unix system load.
what is the number of VMs (and guest CPUs) you’re running on that 48-core host?
also check whether the vdsm or libvirt process CPU usage is exceptionally high.
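as a rough sketch, something like the following gives those numbers (command names assume a stock CentOS 7 / oVirt 4.x host; virsh is only present if the libvirt client tools are installed, so it is guarded here):

```shell
# host load average vs. core count -- sustained load far above the
# number of cores suggests the host is oversubscribed
uptime
nproc

# number of running VMs, via a read-only libvirt connection
# (skipped silently if virsh is not installed)
command -v virsh >/dev/null 2>&1 && virsh -r list --name | sed '/^$/d' | wc -l

# CPU/memory usage of the vdsm and libvirt daemons
ps -eo pid,pcpu,pmem,etime,comm | grep -E 'vdsm|libvirt' || true
```

if the load stays well above the core count, or vdsm/libvirtd sit at high CPU for long stretches, that points at the host being overloaded rather than at a monitoring bug.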
>
> Thanks,
> Mark
>
> From: Pavel Gashev [mailto:Pax at acronis.com]
> Sent: 31 January 2017 15:19
> To: Mark Greenall <m.greenall at iontrading.com>; users at ovirt.org
> Subject: Re: [ovirt-users] Ovirt 4.0.6 guests 'Not Responding'
>
> Mark,
>
> Could you please file a bug report?
>
> Restarting the vdsmd service should resolve the “executor queue full” state.
>
>
> From: <users-bounces at ovirt.org> on behalf of Mark Greenall <m.greenall at iontrading.com>
> Date: Monday 30 January 2017 at 15:26
> To: "users at ovirt.org" <users at ovirt.org>
> Subject: [ovirt-users] Ovirt 4.0.6 guests 'Not Responding'
>
> Hi,
>
> Host server: Dell PowerEdge R815 (40 cores and 768GB memory)
> Storage: Dell EqualLogic (Firmware V8.1.4)
> OS: CentOS 7.3 (although the same thing happens on 7.2)
> Ovirt: 4.0.6.3-1
>
> We have several Ovirt clusters. Two of the hosts (in separate clusters) are showing as up in Hosted Engine, but the guests running on them are showing as Not Responding. I can connect to the guests via ssh, etc., but can’t interact with them from the Ovirt GUI. Everything was fine on Saturday morning (28th Jan), but it looks like something happened Sunday morning around 07:14, as we suddenly see the following in engine.log on one host:
>
> 2017-01-29 07:14:26,952 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [53ca8dc5] VM 'd0aa990f-e6aa-4e79-93ce-011fe1372fb0'(lnd-ion-lindev-01) moved from 'Up' --> 'NotResponding'
> 2017-01-29 07:14:27,069 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler1) [53ca8dc5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM lnd-ion-lindev-01 is not responding.
> 2017-01-29 07:14:27,070 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [53ca8dc5] VM '788bfc0e-1712-469e-9a0a-395b8bb3f369'(lnd-ion-windev-02) moved from 'Up' --> 'NotResponding'
> 2017-01-29 07:14:27,088 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler1) [53ca8dc5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM lnd-ion-windev-02 is not responding.
> 2017-01-29 07:14:27,089 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [53ca8dc5] VM 'd7eaa4ec-d65e-45c0-bc4f-505100658121'(lnd-ion-windev-04) moved from 'Up' --> 'NotResponding'
> 2017-01-29 07:14:27,103 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler1) [53ca8dc5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM lnd-ion-windev-04 is not responding.
> 2017-01-29 07:14:27,104 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [53ca8dc5] VM '5af875ad-70f9-4f49-9640-ee2b9927348b'(lnd-anv9-sup1) moved from 'Up' --> 'NotResponding'
> 2017-01-29 07:14:27,121 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler1) [53ca8dc5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM lnd-anv9-sup1 is not responding.
> 2017-01-29 07:14:27,121 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [53ca8dc5] VM 'b3b7c5f3-0b5b-4d8f-9cc8-b758cc1ce3b9'(lnd-db-dev-03) moved from 'Up' --> 'NotResponding'
> 2017-01-29 07:14:27,136 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler1) [53ca8dc5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM lnd-db-dev-03 is not responding.
> 2017-01-29 07:14:27,137 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [53ca8dc5] VM '6c0a6e17-47c3-4464-939b-e83984dbeaa6'(lnd-db-dev-04) moved from 'Up' --> 'NotResponding'
> 2017-01-29 07:14:27,167 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler1) [53ca8dc5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM lnd-db-dev-04 is not responding.
> 2017-01-29 07:14:27,168 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [53ca8dc5] VM 'ab15bb08-1244-4dc1-a4f1-f6e94246aa23'(lnd-ion-lindev-05) moved from 'Up' --> 'NotResponding'
>
>
> Checking the vdsm logs on the hosts this morning, I see a lot of the following messages:
>
> jsonrpc.Executor/0::WARNING::2017-01-30 09:34:15,989::vm::4890::virt.vm::(_setUnresponsiveIfTimeout) vmId=`ab15bb08-1244-4dc1-a4f1-f6e94246aa23`::monitor became unresponsive (command timeout, age=94854.48)
> jsonrpc.Executor/0::WARNING::2017-01-30 09:34:15,990::vm::4890::virt.vm::(_setUnresponsiveIfTimeout) vmId=`20a51347-ef08-47a9-9982-32b2047991e1`::monitor became unresponsive (command timeout, age=94854.48)
> jsonrpc.Executor/0::WARNING::2017-01-30 09:34:15,991::vm::4890::virt.vm::(_setUnresponsiveIfTimeout) vmId=`2cd8698d-a0f9-43b7-9a89-92a93e920eb7`::monitor became unresponsive (command timeout, age=94854.49)
> jsonrpc.Executor/0::WARNING::2017-01-30 09:34:15,992::vm::4890::virt.vm::(_setUnresponsiveIfTimeout) vmId=`5af875ad-70f9-4f49-9640-ee2b9927348b`::monitor became unresponsive (command timeout, age=94854.49)
>
> and
>
> vdsm.Scheduler::WARNING::2017-01-30 09:36:36,444::periodic::212::virt.periodic.Operation::(_dispatch) could not run <VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x295bd50>, executor queue full
> vdsm.Scheduler::WARNING::2017-01-30 09:36:38,446::periodic::212::virt.periodic.Operation::(_dispatch) could not run <VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x295bd50>, executor queue full
> vdsm.Scheduler::WARNING::2017-01-30 09:36:38,627::periodic::212::virt.periodic.Operation::(_dispatch) could not run <vdsm.virt.sampling.HostMonitor object at 0x295bdd0>, executor queue full
> vdsm.Scheduler::WARNING::2017-01-30 09:36:38,707::periodic::212::virt.periodic.Operation::(_dispatch) could not run <vdsm.virt.sampling.VMBulkSampler object at 0x295ba90>, executor queue full
> vdsm.Scheduler::WARNING::2017-01-30 09:36:38,929::periodic::212::virt.periodic.Operation::(_dispatch) could not run <VmDispatcher operation=<class 'vdsm.virt.periodic.BlockjobMonitor'> at 0x295ba10>, executor queue full
> vdsm.Scheduler::WARNING::2017-01-30 09:36:40,450::periodic::212::virt.periodic.Operation::(_dispatch) could not run <VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x295bd50>, executor queue full
> vdsm.Scheduler::WARNING::2017-01-30 09:36:42,451::periodic::212::virt.periodic.Operation::(_dispatch) could not run <VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x295bd50>, executor queue full
> vdsm.Scheduler::WARNING::2017-01-30 09:36:44,452::periodic::212::virt.periodic.Operation::(_dispatch) could not run <VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x295bd50>, executor queue full
>
> I’ve also attached logs from the time period for one of the hosts in question. This host is in a single-node DC and cluster with iSCSI shared storage. I’ve had to keep the time window on the logs quite small to fit within the mail size limit. Let me know if you need anything more specific.
>
> Many Thanks,
> Mark
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
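for future reference, a quick way to gauge how often those two symptoms recur is to count their signatures in the vdsm log (the default path on a 4.x host is /var/log/vdsm/vdsm.log). shown here on a small inline sample so the commands are self-contained; on a real host, point LOG at the actual log file:

```shell
# count the two warning signatures from this thread; the heredoc stands
# in for /var/log/vdsm/vdsm.log so the example runs anywhere
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
jsonrpc.Executor/0::WARNING::...::monitor became unresponsive (command timeout, age=94854.48)
vdsm.Scheduler::WARNING::...::could not run <...DriveWatermarkMonitor...>, executor queue full
vdsm.Scheduler::WARNING::...::could not run <vdsm.virt.sampling.HostMonitor ...>, executor queue full
EOF

grep -c 'monitor became unresponsive' "$LOG"   # -> 1 on this sample
grep -c 'executor queue full' "$LOG"           # -> 2 on this sample
rm -f "$LOG"
```

a steadily growing count after a vdsmd restart means the executor is filling up again and the underlying load problem is still there.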