
On Tue, May 8, 2018 at 1:30 PM, Yedidyah Bar David <didi@redhat.com> wrote:
On Tue, May 8, 2018 at 8:44 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Mon, May 7, 2018 at 7:57 PM, Eyal Edri <eedri@redhat.com> wrote:
On Mon, May 7, 2018 at 6:29 PM, Barak Korren <bkorren@redhat.com> wrote:
On 7 May 2018 at 18:28, Barak Korren <bkorren@redhat.com> wrote:
On 7 May 2018 at 17:15, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
2018-05-07 16:02 GMT+02:00 Eyal Edri <eedri@redhat.com>:
> On Mon, May 7, 2018 at 1:12 PM, Yedidyah Bar David <didi@redhat.com> wrote:
>> On Sun, May 6, 2018 at 11:57 AM, Eyal Edri <eedri@redhat.com> wrote:
>>>
>>> Should we disable or remove the 4.1 jobs from upstream?
>>>
>>> On Sun, May 6, 2018 at 11:55 AM, <jenkins@jenkins.phx.ovirt.org> wrote:
>>>>
>>>> Project: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.1/
>>>> Build: http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-4.1/373/
>>
>> This seems to be due to our 4.1-snapshot repo now including vdsm-4.20
>> for some reason:
>>
>> http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/
>>
>> Seems like the last build of the job was run accidentally:
>>
>> http://jenkins.ovirt.org/job/vdsm_4.1_build-artifacts-el7-x86_64/
>
> I see it was Lev who triggered a manual build of 4.2 from the 4.1 job,
> any reason for it?
just a mistake
>> We should probably revert the repo to the content of the previous build.
>>
>> Barak warned multiple times against doing such fixes manually. Please
>> handle, or suggest how to handle. Thanks,
>
> Usually the solution is to bump to a higher version, but I don't see
> how it can help here.
> I don't see another way to resolve this other than removing the bad
> VDSM from the 4.1 repo and regenerating the repos.
>
> Barak?
I don't see any other way either.
Yeah, in this case this seems to be the only choice. Hopefully one of the workarounds in the Lago/OST code will also manage to remove it from the host caches; otherwise removing it would be painful.
This problem is unique to 4.1 because it still publishes directly from the build jobs and not (only) from 'tested'. Just building a package manually with a job is usually not enough to get it into one of the repos.
Hmm.... and we probably need to remove the unwanted build or we'll get it back the next time the publisher runs...
I removed the vdsm build with the wrong version from the job. Who can handle the removal of the wrong RPM from the snapshot repo?
I will, now.
Done.
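(For reference, a minimal sketch of what such a manual cleanup could look like on the resources server; the paths and package globs below are assumptions for illustration, not the actual commands that were run:)

    # Assumed location of the 4.1 snapshot repo on resources.ovirt.org;
    # the real path and layout may differ.
    cd /path/to/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/

    # Remove the packages from the accidental build; the exact file names
    # would be taken from the directory listing (e.g. the vdsm-4.20 /
    # vdsm-python-4.19.50-2.git781418b packages mentioned in this thread).
    rm -f vdsm*4.20*.rpm vdsm*4.19.50-2.git781418b*.rpm

    # Regenerate the repository metadata so clients stop seeing the removed RPMs.
    createrepo --update .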
It's still failing, still checking why:

10:31:42 Error: Package: vdsm-4.19.45-1.el7.centos.x86_64 (alocalsync)
10:31:42        Requires: vdsm-python = 4.19.45-1.el7.centos
10:31:42        Installing: vdsm-python-4.19.50-2.git781418b.el7.centos.noarch (alocalsync)
10:31:42            vdsm-python = 4.19.50-2.git781418b.el7.centos
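(A quick way to check whether the published repo itself still advertises the newer vdsm-python, as opposed to only the hosts' local 'alocalsync' mirror, is to query it directly. A sketch assuming yum-utils' repoquery is available; the repo id 'snap41' is just an arbitrary label and the baseurl may need adjusting:)

    # Ask the 4.1 snapshot repo which vdsm-python it currently advertises.
    # If it still reports 4.19.50-2.git781418b, the repo metadata (or a
    # mirror/cache in between) was not fully refreshed after the cleanup.
    repoquery --repofrompath=snap41,http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/ \
              --repoid=snap41 vdsm-python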
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com <https://red.ht/sig> <https://redhat.com/summit>
--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
--
Eyal edri
MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
--
Didi