
Should we disable or remove the 4.1 jobs from upstream?

On Sun, May 6, 2018 at 11:55 AM, <jenkins@jenkins.phx.ovirt.org> wrote:
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.1/
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.1/373/
Build Number: 373
Build Status: Still Failing
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #364 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #365 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #366 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #367 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #368 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #369 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #370 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #371 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #372 [Eyal Edri] update the default template to autogenerate tool
Changes for Build #373 [Eyal Edri] update the default template to autogenerate tool
-----------------
Failed Tests:
-----------------
No tests ran.
--
Eyal Edri
MANAGER, RHV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

On Sun, May 6, 2018 at 11:57 AM, Eyal Edri <eedri@redhat.com> wrote:
Should we disable or remove the 4.1 jobs from upstream?
On Sun, May 6, 2018 at 11:55 AM, <jenkins@jenkins.phx.ovirt.org> wrote:
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.1/
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.1/373/
This seems to be due to our 4.1-snapshot repo now including vdsm-4.20 for some reason:
http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/

It seems the last build of the job was run accidentally:
http://jenkins.ovirt.org/job/vdsm_4.1_build-artifacts-el7-x86_64/

We should probably revert the repo to the content of the previous build. Barak warned multiple times against doing such fixes manually. Please handle, or suggest how to handle.

Thanks,
--
Didi
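P.S. To confirm what leaked, the stray packages show up straight in the repo's directory index; e.g., grepping the HTML listing (rough sketch, pattern is approximate):

  curl -s http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/ | grep -o 'vdsm[^"<]*\.rpm' | sort -u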

I was referring to the fact that 4.1 is EOL; any reason to keep it?

On Mon, May 7, 2018, 13:12 Yedidyah Bar David <didi@redhat.com> wrote:
This seems to be due to our 4.1-snapshot repo now including vdsm-4.20 for some reason. [...] We should probably revert the repo to the content of the previous build. Barak warned multiple times against doing such fixes manually. Please handle, or suggest how to handle.

On Mon, May 7, 2018 at 1:24 PM, Eyal Edri <eedri@redhat.com> wrote:
I was referring to the fact that 4.1 is EOL; any reason to keep it?

Even if we drop the 4.1 jobs, we should definitely fix the 4.1-snapshot repos. We need them to be functional to test upgrades.
--
Didi

On Mon, May 7, 2018 at 1:12 PM, Yedidyah Bar David <didi@redhat.com> wrote:
This seems to be due to our 4.1-snapshot repo now including vdsm-4.20 for some reason:
http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/
Seems like the last build of the job was run accidentally:
http://jenkins.ovirt.org/job/vdsm_4.1_build-artifacts-el7-x86_64/
I see it was Lev who triggered a manual build of 4.2 from the 4.1 job, any reason for it?
We should probably revert the repo to the content of the previous build.
Barak warned multiple times against doing such fixes manually. Please handle, or suggest how to handle. Thanks,
Usually the solution is to bump to a higher version, but I don't see how that can help here. I don't see any way to resolve this other than removing the bad VDSM from the 4.1 repo and regenerating the repos.

Barak?
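Something like this, I guess (untested sketch; the actual path and tooling on resources.ovirt.org may differ):

  # on resources.ovirt.org - path below is illustrative
  cd /path/to/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/
  # drop the vdsm 4.20 packages that leaked from the accidental build
  rm -v vdsm*4.20*.rpm
  # regenerate the repo metadata so yum stops offering them
  createrepo_c --update .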
--
Eyal

2018-05-07 16:02 GMT+02:00 Eyal Edri <eedri@redhat.com>:
I see it was Lev who triggered a manual build of 4.2 from the 4.1 job, any reason for it?
Just a mistake.
Usually the solution is to bump to a higher version, but I don't see how that can help here. I don't see any way to resolve this other than removing the bad VDSM from the 4.1 repo and regenerating the repos.
Barak?
I don't see any other way either.
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com

On 7 May 2018 at 17:15, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Usually the solution is to bump to a higher version, but I don't see how that can help here. I don't see any way to resolve this other than removing the bad VDSM from the 4.1 repo and regenerating the repos.
Barak?
I don't see any other way either.
Yeah, in this case this seems to be the only choice. Hopefully one of the workarounds in the Lago/OST code will also manage to remove it from the host caches; otherwise removing it would be painful. This problem is unique to 4.1, because it is still publishing from the build jobs directly and not (only) from 'tested'. Just building manually with a job is usually insufficient to get a package into one of the repos.
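If the workarounds don't catch it, a manual sweep of the host caches would look roughly like this (hypothetical; /var/lib/lago is only Lago's default cache location, and the CI hosts may be configured differently):

  # on each affected CI host - path is an assumption
  find /var/lib/lago -name 'vdsm*4.20*.rpm' -print -delete
  # the next run should then re-sync from the (fixed) 4.1-snapshot repo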
--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

On 7 May 2018 at 18:28, Barak Korren <bkorren@redhat.com> wrote:
This problem is unique to 4.1, because it is still publishing from the build jobs directly and not (only) from 'tested'. Just building manually with a job is usually insufficient to get a package into one of the repos.
Hmm... and we probably need to remove the unwanted build, or we'll get it back the next time the publisher runs.
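For the record, a stray build can be dropped through Jenkins' REST API, e.g. (needs credentials with delete permission; the build number is a placeholder):

  curl -X POST -u "$USER:$API_TOKEN" \
    "http://jenkins.ovirt.org/job/vdsm_4.1_build-artifacts-el7-x86_64/<BUILD_NUMBER>/doDelete"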
--
Barak

On Mon, May 7, 2018 at 6:29 PM, Barak Korren <bkorren@redhat.com> wrote:
Hmm... and we probably need to remove the unwanted build, or we'll get it back the next time the publisher runs.
I removed the vdsm build with the wrong version from the job. Who can handle removing the wrong RPM from the snapshot repo?
--
Eyal

On Mon, May 7, 2018 at 7:57 PM, Eyal Edri <eedri@redhat.com> wrote:
I removed the vdsm build with the wrong version from the job. Who can handle removing the wrong RPM from the snapshot repo?
I will, now.
--
Didi

On Tue, May 8, 2018 at 8:44 AM, Yedidyah Bar David <didi@redhat.com> wrote:
I will, now.
Done.
--
Didi

On Tue, May 8, 2018 at 1:30 PM, Yedidyah Bar David <didi@redhat.com> wrote:
Done.
It's still failing, still checking why:

10:31:42 Error: Package: vdsm-4.19.45-1.el7.centos.x86_64 (alocalsync)
10:31:42            Requires: vdsm-python = 4.19.45-1.el7.centos
10:31:42            Installing: vdsm-python-4.19.50-2.git781418b.el7.centos.noarch (alocalsync)
10:31:42                vdsm-python = 4.19.50-2.git781418b.el7.centos
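To see which vdsm builds the snapshot repo actually offers, something like this should work (repoquery is from yum-utils; the repo id is arbitrary):

  repoquery --repofrompath=snap,http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/ \
            --repoid=snap --show-duplicates 'vdsm*' | sort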
--
Didi

On Tue, May 8, 2018 at 2:56 PM, Yedidyah Bar David <didi@redhat.com> wrote:
It's still failing, still checking why:
10:31:42 Error: Package: vdsm-4.19.45-1.el7.centos.x86_64 (alocalsync)
10:31:42            Requires: vdsm-python = 4.19.45-1.el7.centos
10:31:42            Installing: vdsm-python-4.19.50-2.git781418b.el7.centos.noarch (alocalsync)
10:31:42                vdsm-python = 4.19.50-2.git781418b.el7.centos
vdsm-4.19.50-2.git781418b was simply missing for x86_64, not sure why. Did 'ci re-merge please' in https://gerrit.ovirt.org/#/c/89298/ , which has this hash, although it now also has a tag v4.19.51 - probably it was tagged later - so jenkins builds with the tag in the name (and not the hash). However, check-merge fails [1], perhaps due to updates in lago or whatever:

12:42:06 Error occured, aborting
12:42:06 Traceback (most recent call last):
12:42:06   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 969, in main
12:42:06     cli_plugins[args.verb].do_run(args)
12:42:06   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
12:42:06     self._do_run(**vars(args))
12:42:07   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 194, in do_init
12:42:07     do_build=not skip_build,
12:42:07   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1111, in virt_conf_from_stream
12:42:07     do_build=do_build
12:42:07   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1226, in virt_conf
12:42:07     template_store=template_store,
12:42:07   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1131, in _prepare_domains_images
12:42:07     template_store=template_store,
12:42:07   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1153, in _prepare_domain_image
12:42:07     template_store=template_store,
12:42:07   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1181, in _create_disks
12:42:07     template_store=template_store,
12:42:07   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 667, in _create_disk
12:42:07     template_store=template_store,
12:42:07   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 796, in _handle_template
12:42:07     template_repo=template_repo,
12:42:07   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 924, in _handle_lago_template
12:42:07     template = template_repo.get_by_name(template_spec['template_name'])
12:42:07   File "/usr/lib/python2.7/site-packages/lago/templates.py", line 388, in get_by_name
12:42:07     spec = self._dom.get('templates', {})[name]
12:42:07 KeyError: 'el7-base'

Perhaps someone can have a look, and/or we can decide to give up and disable the he-4.1 job. Adding also edwardh.

[1] http://jenkins.ovirt.org/job/vdsm_4.1_check-merged-el7-x86_64/438/

Best regards,
--
Didi
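P.S. The KeyError above just means the template repo metadata no longer defines 'el7-base'. Assuming the job uses Lago's default template repo (an assumption on my part), the metadata can be inspected with something like:

  curl -s http://templates.ovirt.org/repo/repo.metadata | python -m json.tool | grep -B1 -A3 el7-base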

2018-05-08 15:01 GMT+02:00 Yedidyah Bar David <didi@redhat.com>:
Perhaps someone can have a look, and/or we can decide to give up and disable the he-4.1 job.
+1 for dropping the job.
--
Sandro

On Wed, May 9, 2018 at 11:37 AM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
+1 for dropping the job.
https://gerrit.ovirt.org/91083
--
Didi
participants (4): Barak Korren, Eyal Edri, Sandro Bonazzola, Yedidyah Bar David