
Hi,

I'm seeing the HE OST jobs failing since yesterday during reposync from the 4.1 repos. This is an example of the errors I'm seeing:

10:qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64: [Errno 256] No more mirrors to try.
10:qemu-img-ev-2.6.0-28.el7.10.1.x86_64: [Errno 256] No more mirrors to try.
ovirt-scheduler-proxy-0.1.7-1.el7.centos.noarch: [Errno 256] No more mirrors to try.
java-ovirt-engine-sdk4-4.1.3-1.el7.centos.noarch: [Errno 256] No more mirrors to try.
ovirt-provider-ovn-1.0-8.el7.centos.noarch: [Errno 256] No more mirrors to try.
10:qemu-kvm-tools-ev-2.6.0-28.el7.10.1.x86_64: [Errno 256] No more mirrors to try.
ovirt-provider-ovn-driver-1.0-8.el7.centos.noarch: [Errno 256] No more mirrors to try.
ovirt-image-uploader-4.0.1-1.el7.centos.noarch: [Errno 256] No more mirrors to try.
ovirt-engine-setup-plugin-live-4.1.0-1.el7.centos.noarch: [Errno 256] No more mirrors to try.
unboundid-ldapsdk-3.2.0-1.el7.noarch: [Errno 256] No more mirrors to try.
ovirt-engine-wildfly-10.1.0-1.el7.x86_64: [Errno 256] No more mirrors to try.
10:qemu-kvm-common-ev-2.6.0-28.el7.10.1.x86_64: [Errno 256] No more mirrors to try.
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch: [Errno 256] No more mirrors to try.

This looks like we have package files missing from the repo without the repo metadata (MD) files having been properly updated.

I'm running the same reposync in Docker at the moment to see if this reproduces.

Did anyone intentionally remove any packages? Do we need to update some procedure to make sure people remember to run 'createrepo' or use repoman?

--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
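A quick way to rule out the missing-files theory (a rough sketch, not what the job runs; the repo base URL and the flat x86_64/ layout are assumptions taken from the link later in this thread) is to HEAD-request a few of the files reposync complained about:

    # Sketch: check whether the failing packages actually exist on the server.
    import urllib.request
    from urllib.error import HTTPError

    BASE = "http://plain.resources.ovirt.org/pub/ovirt-4.1/rpm/el7/x86_64/"  # assumed layout
    FAILED = [
        "qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64.rpm",
        "qemu-img-ev-2.6.0-28.el7.10.1.x86_64.rpm",
        "qemu-kvm-common-ev-2.6.0-28.el7.10.1.x86_64.rpm",
        "ovirt-engine-wildfly-10.1.0-1.el7.x86_64.rpm",
    ]

    for name in FAILED:
        req = urllib.request.Request(BASE + name, method="HEAD")
        try:
            resp = urllib.request.urlopen(req)
            print("OK     ", name, resp.headers.get("Content-Length"), "bytes")
        except HTTPError as err:
            print("MISSING", name, "HTTP", err.code)

If any of these came back 404 while the metadata still lists them, that would confirm the stale-metadata theory.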

Hi Barak,

That is a bit strange, as the package is in the repo:
http://plain.resources.ovirt.org/pub/ovirt-4.1/rpm/el7/x86_64/qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64.rpm

We need to check whether the repodata is corrupted.

Thanks in advance,
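One way to check that (a minimal sketch, assuming the repo base URL from the link above) is to re-download each metadata file referenced by repomd.xml and compare it against the checksum recorded there:

    # Sketch: verify that the repodata files match the checksums in repomd.xml.
    import hashlib
    import urllib.request
    import xml.etree.ElementTree as ET

    REPO = "http://plain.resources.ovirt.org/pub/ovirt-4.1/rpm/el7/"  # assumed repo base
    NS = "{http://linux.duke.edu/metadata/repo}"

    repomd = ET.parse(urllib.request.urlopen(REPO + "repodata/repomd.xml"))
    for data in repomd.iter(NS + "data"):
        href = data.find(NS + "location").get("href")
        checksum = data.find(NS + "checksum")
        algo = {"sha": "sha1"}.get(checksum.get("type"), checksum.get("type"))
        blob = urllib.request.urlopen(REPO + href).read()
        actual = hashlib.new(algo, blob).hexdigest()
        print("OK " if actual == checksum.text else "BAD", data.get("type"), href)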
--
Lev Veyde
Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel <https://www.redhat.com>
lev@redhat.com | lveyde@redhat.com
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

Hi Barak,

No, the repodata seems to be fine... We'll need to debug this further.
--
Lev Veyde
Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel <https://www.redhat.com>
lev@redhat.com | lveyde@redhat.com
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

Hi Barak,

Looks like a network connectivity issue:

09:10:03 qemu-img-ev-2.6.0-28.el7.10.1. FAILED
09:10:03 (83/142): qemu-img-ev-2.6.     88% [==============   ] 45 MB/s | 1.3 GB  00:03 ETA
         qemu-img-ev-2.6.0-28.el7.10.1. FAILED
09:10:03 (83/142): qemu-img-ev-2.6.     88% [==============   ] 45 MB/s | 1.3 GB  00:03 ETA
         qemu-kvm-common-ev-2.6.0-28.el FAILED
09:10:03 (83/142): qemu-kvm-common-     88% [==============   ] 45 MB/s | 1.3 GB  00:03 ETA
         qemu-kvm-common-ev-2.6.0-28.el FAILED
09:10:03 (83/142): qemu-kvm-common-     88% [==============   ] 45 MB/s | 1.3 GB  00:03 ETA
         qemu-kvm-ev-2.6.0-28.el7.10.1. FAILED
09:10:03 (83/142): qemu-kvm-ev-2.6.     88% [==============   ] 45 MB/s | 1.3 GB  00:03 ETA
--
Lev Veyde
Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel <https://www.redhat.com>
lev@redhat.com | lveyde@redhat.com
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

On 2 August 2017 at 14:24, Lev Veyde <lveyde@redhat.com> wrote:
Hi Barak,
Looks like a network connectivity issue:
Three times in a row, from a local server? For the same files? While we have 100 other jobs downloading from the same server? While the exact same process is downloading successfully? No, that is not it.

I was thinking we might have RPM files that have been changed, but I'm not finding different MD5 sums between what we have cached on the slave and what is in the repo. I will try to flush the local cache and rerun.

--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
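For reference, a minimal sketch of that MD5 comparison (the cache path below is hypothetical and would need to be adjusted to the slave's actual reposync location; the URL is the one from earlier in the thread):

    # Sketch: compare the MD5 of the cached RPM on the slave with the repo copy.
    import hashlib
    import urllib.request

    # Hypothetical cache location on the slave; adjust as needed.
    CACHED = "/var/cache/reposync/ovirt-4.1-el7/qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64.rpm"
    REPO_URL = ("http://plain.resources.ovirt.org/pub/ovirt-4.1/rpm/el7/x86_64/"
                "qemu-kvm-ev-2.6.0-28.el7.10.1.x86_64.rpm")

    with open(CACHED, "rb") as f:
        local_md5 = hashlib.md5(f.read()).hexdigest()
    remote_md5 = hashlib.md5(urllib.request.urlopen(REPO_URL).read()).hexdigest()

    print("cached:", local_md5)
    print("repo:  ", remote_md5)
    print("match" if local_md5 == remote_md5 else "DIFFERENT")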

Ah, I see you already got a successful run today, so recreating the metadata worked? Or did we just get lucky and run it on a slave with an empty cache?

--
Barak Korren
RHV DevOps team, RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Hi Barak,

Not sure if it was that or something else... but yes, I did re-create the metadata, even though I could successfully access the repo and even get that specific RPM package's info, including the right size.

Personally I'm not sure what really happened here: the download in the job was failing several times at exactly 88%, but I was able to download the RPM (with wget) without any issue from my test VM. If the failure were due to corrupted yum repo data (yum checks against the file size it gets from the metadata DB, not the size it receives from httpd), then I would expect yum info to report the RPM size as about 2.84 MB. But it reported it as 2.5 MB, which is its actual size.

So to summarize: I'm not really sure what went wrong here, and we need to keep an eye on this to see if it's really resolved.

Thanks in advance,
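A small sketch of that size cross-check, assuming the same repo base URL as in the earlier link: it reads the package size recorded in primary.xml and compares it with the Content-Length that httpd actually serves.

    # Sketch: compare the package size in the repo metadata with what httpd serves.
    import gzip
    import urllib.request
    import xml.etree.ElementTree as ET

    REPO = "http://plain.resources.ovirt.org/pub/ovirt-4.1/rpm/el7/"  # assumed repo base
    NS_R = "{http://linux.duke.edu/metadata/repo}"
    NS_C = "{http://linux.duke.edu/metadata/common}"
    NAME = "qemu-img-ev"  # one of the packages that kept failing at 88%

    # Locate primary.xml.gz via repomd.xml.
    repomd = ET.parse(urllib.request.urlopen(REPO + "repodata/repomd.xml"))
    primary_href = next(d.find(NS_R + "location").get("href")
                        for d in repomd.iter(NS_R + "data")
                        if d.get("type") == "primary")

    # Compare the metadata-recorded size with the Content-Length from httpd.
    primary = ET.fromstring(gzip.decompress(urllib.request.urlopen(REPO + primary_href).read()))
    for pkg in primary.iter(NS_C + "package"):
        if pkg.find(NS_C + "name").text != NAME:
            continue
        meta_size = pkg.find(NS_C + "size").get("package")
        href = pkg.find(NS_C + "location").get("href")
        head = urllib.request.Request(REPO + href, method="HEAD")
        served = urllib.request.urlopen(head).headers.get("Content-Length")
        print(href, "metadata says", meta_size, "bytes, httpd serves", served, "bytes")

If those two numbers disagreed, that would point back at stale metadata; if they match, the 88% failures are more likely something in the transport or in the local cache.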
--
Lev Veyde
Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel <https://www.redhat.com>
lev@redhat.com | lveyde@redhat.com
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>