And it's still failing since Friday.
Since we don't have official CentOS 7.3 repos yet (hopefully we'll have them
this week, but as of this moment they are not published), we have to
either revert the offending patch
or send a quick fix.
Right now all experimental flows for master are failing, and the nightly
repos are not being refreshed with new RPMs.
On Fri, Dec 2, 2016 at 9:41 PM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
On Dec 2, 2016 2:11 PM, "Anton Marchukov" <amarchuk(a)redhat.com> wrote:
Hello Martin.
By outdated, do you mean the old libvirt? If so, is that the libvirt
available in CentOS 7.2? There is no 7.3 yet.
Right, this is the issue.
Y.
Anton.
On Fri, Dec 2, 2016 at 1:07 PM, Martin Polednik <mpolednik(a)redhat.com>
wrote:
> On 02/12/16 10:55 +0100, Anton Marchukov wrote:
>
>> Hello All.
>>
>> Engine log can be viewed here:
>>
>>
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log_ovirt-engine/engine.log
>>
>> I see the following exception there:
>>
>> 2016-12-02 04:29:24,030-05 DEBUG
>> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
>> (ResponseWorker) [83b6b5d] Message received: {"jsonrpc": "2.0", "id":
>> "ec254aad-441b-47e7-a644-aebddcc1d62c", "result": true}
>> 2016-12-02 04:29:24,030-05 ERROR
>> [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker)
>> [83b6b5d] Not able to update response for
>> "ec254aad-441b-47e7-a644-aebddcc1d62c"
>> 2016-12-02 04:29:24,041-05 DEBUG
>> [org.ovirt.engine.core.utils.timer.FixedDelayJobListener]
>> (DefaultQuartzScheduler3) [47a31d72] Rescheduling
>> DEFAULT.org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshLightWeightData#-9223372036854775775
>> as there is no unfired trigger.
>> 2016-12-02 04:29:24,024-05 DEBUG
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default
>> task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Exception:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
>> VDSGenericException: VDSNetworkException: Timeout during xml-rpc call
>> at org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73)
>> [vdsbroker.jar:]
>>
>> ...
>>
>> 2016-12-02 04:29:24,042-05 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default
>> task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] Timeout waiting for
>> VDSM response: Internal timeout occured
>> 2016-12-02 04:29:24,044-05 DEBUG
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
>> (default task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] START,
>> GetCapabilitiesVDSCommand(HostName = lago-basic-suite-master-host0,
>> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
>> hostId='5eb7019e-28a3-4f93-9188-685b6c64a2f5',
>> vds='Host[lago-basic-suite-master-host0,5eb7019e-28a3-4f93-9188-685b6c64a2f5]'}),
>> log id: 58f448b8
>> 2016-12-02 04:29:24,044-05 DEBUG
>> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (default
>> task-12) [d932871a-af4f-4fc9-9ee5-f7a0126a7b85] SEND
>> destination:jms.topic.vdsm_requests
>> reply-to:jms.topic.vdsm_responses
>> content-length:105
>>
>>
>> Please note that this runs on localhost with a local bridge, so the
>> network itself is unlikely to be the problem.
>>
>
> The main issue I see is that the VM run command has actually failed
> because libvirt is not accepting /dev/urandom as an RNG source [1]. This
> was done as an engine patch and, according to git log, posted around Mon
> Nov 28. Also adding Jakub - this should either not happen from the
> engine's point of view, or the lago host is outdated.
>
> [1]
>
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/artifact/exported-artifacts/basic_suite_master.sh-el7/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host0/_var_log_vdsm/vdsm.log
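> For context, this failure pattern matches older libvirt builds (such as
> the one shipped in CentOS 7.2), which as far as I know only whitelist
> /dev/random and /dev/hwrng as host RNG backends. A minimal sketch of the
> kind of domain XML the engine would now be generating - the exact element
> the patch emits is an assumption on my part:

```xml
<!-- Hypothetical sketch of the RNG device section of the generated domain
     XML. Older libvirt rejects /dev/urandom as the backend path here and
     only accepts /dev/random or /dev/hwrng. -->
<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>
</rng>
```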
>
>
> Anton.
>>
>> On Fri, Dec 2, 2016 at 10:43 AM, Anton Marchukov <amarchuk(a)redhat.com>
>> wrote:
>>
>>> FYI. The experimental flow for master currently fails to run a VM. The
>>> test times out while waiting for 180 seconds:
>>>
>>>
>>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/3838/testReport/(root)/004_basic_sanity/vm_run/
>>>
>>> This is reproducible across the 23 runs that happened tonight, so it
>>> sounds like a regression to me:
>>>
>>>
>>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/
>>>
>>> I will update here with additional information once I find it.
>>>
>>> Last successful run was with this patch:
>>>
>>>
>>> https://gerrit.ovirt.org/#/c/66416/ (vdsm: API: move vm parameters
>>> fixup in a method)
>>>
>>> Known to start failing around this patch:
>>>
>>>
>>> https://gerrit.ovirt.org/#/c/67647/ (vdsmapi: fix a typo in string
>>> formatting)
>>>
>>> Please note that we do not have gating implemented yet, so anything
>>> that was merged in between those patches might have caused this (not
>>> necessarily in the vdsm project).
>>>
>>> Anton.
>>> --
>>> Anton Marchukov
>>> Senior Software Engineer - RHEV CI - Red Hat
>>>
>>>
>>>
>>
>>
>
> _______________________________________________
>> Devel mailing list
>> Devel(a)ovirt.org
>>
http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
--
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)