Hello All.
Engine log can be viewed here:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/art...
I see the following exception there:
2016-12-02 04:29:24,030-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [83b6b5d] Message received: {"jsonrpc": "2.0", "id": "ec254aad-441b-47e7-a644-aebddcc1d62c", "result": true}
2016-12-02 04:29:24,030-05 ERROR [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [83b6b5d] Not able to update response for "ec254aad-441b-47e7-a644-aebddcc1d62c"
2016-12-02 04:29:24,041-05 DEBUG [org.ovirt.engine.core.utils.timer.FixedDelayJobListener] (DefaultQuartzScheduler3) [47a31d72] Rescheduling DEFAULT.org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshLightWeightData#-9223372036854775775 as there is no unfired trigger.
2016-12-02 04:29:24,024-05 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Timeout during xml-rpc call
    at org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73) [vdsbroker.jar:]
    ....
2016-12-02 04:29:24,042-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Timeout waiting for VDSM response: Internal timeout occured
2016-12-02 04:29:24,044-05 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] START, GetCapabilitiesVDSCommand(HostName = lago-basic-suite-master-host0, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='5eb7019e-28a3-4f93-9188-685b6c64a2f5', vds='Host[lago-basic-suite-master-host0,5eb7019e-28a3-4f93-9188-685b6c64a2f5]'}), log id: 58f448b8
2016-12-02 04:29:24,044-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (default task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] SEND
destination:jms.topic.vdsm_requests
reply-to:jms.topic.vdsm_responses
content-length:105
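For context on how those two messages fit together: PollVDSCommand times out on the engine side and drops its pending call, and the host's reply then arrives a few milliseconds later, which is when JsonRpcClient logs "Not able to update response" because there is no pending call left to match it to. Below is a minimal Java sketch of that race, just to illustrate the pattern; the class and method names, timeouts and bookkeeping are mine, not the actual org.ovirt.vdsm.jsonrpc.client implementation.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch only, not the actual oVirt JSON-RPC client code.
public class PendingCallSketch {

    // Pending calls keyed by JSON-RPC message id.
    private final Map<String, CompletableFuture<Object>> pending = new ConcurrentHashMap<>();

    // Caller side: send a request and wait a bounded time for the reply.
    // FutureVDSCommand.get() does a similar bounded wait, which surfaces as
    // "VDSNetworkException: Timeout during xml-rpc call" in the log above.
    Object call(String id, long timeoutSeconds) throws Exception {
        CompletableFuture<Object> future = new CompletableFuture<>();
        pending.put(id, future);
        try {
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            System.out.println("Timeout waiting for response: internal timeout");
            throw e;
        } finally {
            // On timeout the pending entry is dropped...
            pending.remove(id);
        }
    }

    // Response-worker side: a reply arrives from the transport.
    void onResponse(String id, Object result) {
        CompletableFuture<Object> future = pending.remove(id);
        if (future == null) {
            // ...so a reply that arrives just a bit too late can no longer be
            // matched, i.e. the "Not able to update response for <id>" error.
            System.out.println("Not able to update response for \"" + id + "\"");
            return;
        }
        future.complete(result);
    }

    public static void main(String[] args) throws Exception {
        PendingCallSketch client = new PendingCallSketch();
        String id = UUID.randomUUID().toString();
        ScheduledExecutorService worker = Executors.newSingleThreadScheduledExecutor();
        // Deliver the reply after the caller's timeout has already expired.
        worker.schedule(() -> client.onResponse(id, Boolean.TRUE), 2, TimeUnit.SECONDS);
        try {
            client.call(id, 1); // times out first, like PollVDSCommand in the log
        } catch (TimeoutException expected) {
            // expected in this demo
        }
        worker.shutdown();
    }
}

If that reading is right, the late "result": true reply itself is harmless; the interesting question is why the host took longer than the engine's internal timeout to answer the poll in the first place.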
Please note that this runs on localhost with a local bridge, so the problem is not likely to be the network itself.
Anton.
On Fri, Dec 2, 2016 at 10:43 AM, Anton Marchukov <amarchuk(a)redhat.com>
wrote:
FYI. The experimental flow for master currently fails to run a VM. The test times out after waiting for 180 seconds:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/testReport/(root)/004_basic_sanity/vm_run/
This is reproducible over the 23 runs that happened tonight, so it sounds like a regression to me:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/
I will update here with additional information once I find it.
Last successful run was with this patch:
https://gerrit.ovirt.org/#/c/66416/ (vdsm: API: move vm parameters fixup
in a method)
Known to start failing around this patch:
https://gerrit.ovirt.org/#/c/67647/ (vdsmapi: fix a typo in string
formatting)
Please note that we do not have gating implemented yet, so anything that was merged in between those patches might have caused this (not necessarily in the vdsm project).
Anton.
--
Anton Marchukov
Senior Software Engineer - RHEV CI - Red Hat