I am checking the failed jobs.
However, please note that I think you are confusing two different issues.
Currently, we (CI) have a problem in the job that syncs the package to the
snapshot repo. This job runs nightly, and we had no way of knowing it would
fail until today.
Before today, we had several regressions lasting two weeks, which meant no
package was built at all.
So these are different issues.
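
As an illustration only (the repo URL and age threshold below are assumptions,
not the actual job configuration), a small staleness check along these lines
could have flagged the broken nightly sync earlier:

    # Sketch: flag a stale snapshot repo by checking when its metadata was
    # last regenerated. URL, path layout and threshold are assumptions.
    import time
    import urllib.request
    import xml.etree.ElementTree as ET

    REPOMD_URL = ("https://resources.ovirt.org/pub/ovirt-master-snapshot/"
                  "rpm/el7/repodata/repomd.xml")  # assumed repo layout
    MAX_AGE_HOURS = 36  # one missed nightly run plus some slack

    ns = {"repo": "http://linux.duke.edu/metadata/repo"}
    with urllib.request.urlopen(REPOMD_URL) as resp:
        root = ET.fromstring(resp.read())

    # createrepo records the generation time (epoch seconds) in <revision>
    revision = int(root.find("repo:revision", ns).text)
    age_hours = (time.time() - revision) / 3600.0
    print("snapshot repo metadata is %.1f hours old" % age_hours)
    if age_hours > MAX_AGE_HOURS:
        raise SystemExit("snapshot repo looks stale; the nightly sync probably failed")

Hooking a check like this into a cron alert would turn a silent sync failure
into an immediate notification instead of a surprise days later.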
On Thu, Nov 15, 2018 at 10:54 AM Dan Kenigsberg <danken(a)redhat.com> wrote:
On Thu, Nov 15, 2018 at 12:45 PM Eyal Edri <eedri(a)redhat.com> wrote:
>
>
>
> On Thu, Nov 15, 2018 at 12:43 PM Dan Kenigsberg <danken(a)redhat.com> wrote:
>>
>> On Wed, Nov 14, 2018 at 5:07 PM Dan Kenigsberg <danken(a)redhat.com> wrote:
>> >
>> > On Wed, Nov 14, 2018 at 12:42 PM Dominik Holler <dholler(a)redhat.com> wrote:
>> > >
>> > > On Wed, 14 Nov 2018 11:24:10 +0100
>> > > Michal Skrivanek <mskrivan(a)redhat.com> wrote:
>> > >
>> > > > > On 14 Nov 2018, at 10:50, Dominik Holler <dholler(a)redhat.com> wrote:
>> > > > >
>> > > > > On Wed, 14 Nov 2018 09:27:39 +0100
>> > > > > Dominik Holler <dholler(a)redhat.com> wrote:
>> > > > >
>> > > > >> On Tue, 13 Nov 2018 13:01:09 +0100
>> > > > >> Martin Perina <mperina(a)redhat.com> wrote:
>> > > > >>
>> > > > >>> On Tue, Nov 13, 2018 at 12:49 PM Michal Skrivanek <mskrivan(a)redhat.com> wrote:
>> > > > >>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> On 13 Nov 2018, at 12:20, Dominik Holler <dholler(a)redhat.com> wrote:
>> > > > >>>>
>> > > > >>>> On Tue, 13 Nov 2018 11:56:37 +0100
>> > > > >>>> Martin Perina <mperina(a)redhat.com> wrote:
>> > > > >>>>
>> > > > >>>> On Tue, Nov 13, 2018 at 11:02 AM Dafna Ron <dron(a)redhat.com> wrote:
>> > > > >>>>
>> > > > >>>> Martin? Can you please look at the patch that Dominik sent?
>> > > > >>>> We need to resolve this as we have not had an engine build for the last 11 days.
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> Yesterday I merged Dominik's revert patch https://gerrit.ovirt.org/95377,
>> > > > >>>> which should switch the cluster level back to 4.2. The below-mentioned change
>> > > > >>>> https://gerrit.ovirt.org/95310 is relevant only to cluster level 4.3, am I right, Michal?
>> > > > >>>>
>> > > > >>>> The build mentioned,
>> > > > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_cha...
>> > > > >>>> is from yesterday. Are we sure that it was executed only after #95377 was merged?
>> > > > >>>> I'd like to see the results from the latest
>> > > > >>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_cha...
>> > > > >>>> but unfortunately it has already been waiting more than an hour for available hosts ...
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
https://gerrit.ovirt.org/#/c/95283/ results in
>> > > > >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_...
>> > > > >>>> which is used in
>> > > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
>> > > > >>>> results in run_vms succeeding.
>> > > > >>>>
>> > > > >>>> The next merged change
>> > > > >>>> https://gerrit.ovirt.org/#/c/95310/ results in
>> > > > >>>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_...
>> > > > >>>> which is used in
>> > > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
>> > > > >>>> results in run_vms failing with
>> > > > >>>> 2018-11-12 17:35:10,109-05 INFO [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Running command: RunVmOnceCommand internal: false. Entities affected : ID: d10aa133-b9b6-455d-8137-ab822d1c1971 Type: VMAction group RUN_VM with role type USER
>> > > > >>>> 2018-11-12 17:35:10,113-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: getVmManager, params: [d10aa133-b9b6-455d-8137-ab822d1c1971], timeElapsed: 4ms
>> > > > >>>> 2018-11-12 17:35:10,128-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] method: getAllForClusterWithStatus, params: [2ca9ccd8-61f0-470c-ba3f-07766202f260, Up], timeElapsed: 7ms
>> > > > >>>> 2018-11-12 17:35:10,129-05 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 'lago-basic-suite-master-host-1' ('282860ab-8873-4702-a2be-100a6da111af') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
>> > > > >>>> 2018-11-12 17:35:10,129-05 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Candidate host 'lago-basic-suite-master-host-0' ('c48eca36-ea98-46b2-8473-f184833e68a8') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 6930b632-5593-4481-bf2a-a1d8b14a583a)
>> > > > >>>> 2018-11-12 17:35:10,130-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (default task-1) [6930b632-5593-4481-bf2a-a1d8b14a583a] Can't find VDS to run the VM 'd10aa133-b9b6-455d-8137-ab822d1c1971' on, so this VM will not be run.
>> > > > >>>> in
>> > > > >>>> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
>> > > > >>>>
>> > > > >>>> Is this helpful for you?
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> actually, there are two issues
>> > > > >>>> 1) cluster is still 4.3 even after Martin's revert.
>> > > > >>>>
>> > > > >>>
>> > > > >>>
https://gerrit.ovirt.org/#/c/95409/ should align cluster level with dc level
>> > > > >>>
>> > > > >>
>> > > > >> This change aligns the cluster level, but
>> > > > >> https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
>> > > > >> consuming the build result from
>> > > > >> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_cha...
>> > > > >> makes it look like this does not solve the issue:
>> > > > >>   File "/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py", line 698, in run_vms
>> > > > >>     api.vms.get(VM0_NAME).start(start_params)
>> > > > >>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 31193, in start
>> > > > >>     headers={"Correlation-Id":correlation_id}
>> > > > >>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 122, in request
>> > > > >>     persistent_auth=self.__persistent_auth
>> > > > >>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 79, in do_request
>> > > > >>     persistent_auth)
>> > > > >>   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 162, in __do_request
>> > > > >>     raise errors.RequestError(response_code, response_reason, response_body)
>> > > > >> RequestError:
>> > > > >> status: 400
>> > > > >> reason: Bad Request
>> > > > >>
>> > > > >> engine.log:
>> > > > >> 2018-11-14 03:10:36,802-05 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3) [99e282ea-577a-4dab-857b-285b1df5e6f6] Candidate host 'lago-basic-suite-master-host-0' ('4dbfb937-ac4b-4cef-8ae3-124944829add') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 99e282ea-577a-4dab-857b-285b1df5e6f6)
>> > > > >> 2018-11-14 03:10:36,802-05 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-3) [99e282ea-577a-4dab-857b-285b1df5e6f6] Candidate host 'lago-basic-suite-master-host-1' ('731e5055-706e-4310-a062-045e32ffbfeb') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU-Level' (correlation id: 99e282ea-577a-4dab-857b-285b1df5e6f6)
>> > > > >> 2018-11-14 03:10:36,802-05 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (default task-3) [99e282ea-577a-4dab-857b-285b1df5e6f6] Can't find VDS to run the VM 'dc1e1e92-1e5c-415e-8ac2-b919017adf40' on, so this VM will not be run.
>> > > > >>
>> > > > >>
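
For anyone trying to reproduce this outside of OST: the SDK surfaces only the
bare 400, while the real reason ("Can't find VDS" / CPU-Level filter) shows up
only in engine.log. A rough sketch along these lines, using the same v3 SDK as
the suite (the engine URL, credentials and VM name are placeholders, not values
from the suite), at least prints whatever detail the error object carries:

    # Rough reproduction sketch with the v3 Python SDK used by the suite.
    # URL, credentials and VM name below are placeholders (assumptions).
    from ovirtsdk.api import API
    from ovirtsdk.infrastructure import errors
    from ovirtsdk.xml import params

    api = API(url="https://engine.example.com/ovirt-engine/api",
              username="admin@internal", password="secret", insecure=True)
    try:
        api.vms.get("vm0").start(params.Action())
    except errors.RequestError as e:
        # The HTTP layer reports only 400/Bad Request; print whatever extra
        # detail the error object carries, if any.
        print("status: %s" % e.status)
        print("reason: %s" % e.reason)
        print("detail: %s" % getattr(e, "detail", None))
    finally:
        api.disconnect()

Even with that, the scheduling detail above is only visible in engine.log, so
the log remains the place to look for the actual filter that rejected the hosts.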
>> > > > >
>> > > > >
>> > > > >
https://gerrit.ovirt.org/#/c/95283/ results in
>> > > > > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_...
>> > > > > which is used in
>> > > > > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
>> > > > > results in run_vms succeeding.
>> > > > >
>> > > > > The next merged change
>> > > > > https://gerrit.ovirt.org/#/c/95310/ results in
>> > > > > http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_...
>> > > > > which is used in
>> > > > > https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
>> > > > > results in run_vms failing with
>> > > > >   File "/home/jenkins/workspace/ovirt-system-tests_manual/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py", line 698, in run_vms
>> > > > >     api.vms.get(VM0_NAME).start(start_params)
>> > > > >   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 31193, in start
>> > > > >     headers={"Correlation-Id":correlation_id}
>> > > > >   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 122, in request
>> > > > >     persistent_auth=self.__persistent_auth
>> > > > >   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 79, in do_request
>> > > > >     persistent_auth)
>> > > > >   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 162, in __do_request
>> > > > >     raise errors.RequestError(response_code, response_reason, response_body)
>> > > > > RequestError:
>> > > > > status: 400
>> > > > > reason: Bad Request
>> > > > >
>> > > > >
>> > > > > So even if the Cluster Level should be 4.2 now, still
>> > > > > https://gerrit.ovirt.org/#/c/95310/ seems to influence the behavior.
>> > > >
>> > > > I really do not see how it can affect 4.2.
>> > >
>> > > Me neither.
>> > >
>> > > > Are you sure the cluster is really 4.2? Sadly it's not being logged at all
>> > >
>> > > screenshot from local execution: https://imgur.com/a/yiWBw3c
>> > >
>> > > > But if it really seems to matter (and since it needs a fix anyway for 4.3), feel free to revert it, of course
>> > > >
>> > >
>> > > I will post a revert change and check if this changes the behavior.
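
Since the cluster level is not logged, it can also be read straight from the
API rather than from a screenshot. A minimal sketch with the v4 Python SDK
(the engine URL and credentials are placeholders; note the suite itself still
uses the older v3 SDK seen in the traceback above):

    # Sketch: list each cluster's compatibility version and CPU type via the
    # oVirt v4 Python SDK. Engine URL and credentials are placeholders.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(url="https://engine.example.com/ovirt-engine/api",
                                username="admin@internal", password="secret",
                                insecure=True)
    try:
        clusters = connection.system_service().clusters_service().list()
        for cluster in clusters:
            level = "%s.%s" % (cluster.version.major, cluster.version.minor)
            cpu_type = cluster.cpu.type if cluster.cpu else "n/a"
            print("%s: level %s, cpu %s" % (cluster.name, level, cpu_type))
    finally:
        connection.close()

If a cluster still reports 4.3 here after the revert, that would match the
symptom discussed above independently of what the UI screenshot shows.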
>> >
>> > Dominik, thanks for the research and for Martin's and your
>> > reverts/fixes. Finally Engine passes OST
>> >
>> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/11153/
>> > and QE can expect a build tomorrow, after 2 weeks of droughts.
>>
>> unfortunately, the drought continues.
>
>
> Sorry, I'm missing the content or meaning here. What does drought mean?
Pardon my flowery language. I mean 2 weeks of no ovirt-engine builds.
>
>>
>> Barrak tells me that something is broken in the nightly cron job
>> copying the tested repo onto the master-snapshot one.
>
>
> Dafna, can you check this?
>
>>
>>
>> +Edri: please make it a priority to have it fixed.
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV/CNV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA
>
> TRIED. TESTED. TRUSTED.
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)