Hi Barak,

Not sure if it was that or something else... but yes, I did re-create the metadata, even though I could successfully access the repo and even get that specific RPM package's info, including the right size.

Personally, I'm not sure what really happened here: the download in the job was failing several times at exactly 88%, but I was able to download the RPM (w/ wget) without any issue from my test VM.
That means that if it was due to corrupted yum repo data, and yum uses the file size it gets from the DB rather than the size it receives from the httpd, then I would expect yum info to report the RPM size as about 2.84 MB (2.5 MB, the point where the downloads stalled, is ~88% of 2.84 MB). But it reported it as 2.5 MB, which is its actual size.
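For what it's worth, next time I'd verify that theory directly with something like the following; just a minimal Python 3 sketch, where the repo URL and package name are placeholders and not our actual values:

import subprocess
import urllib.request

REPO_RPM_URL = "http://repo.example.com/Packages/foo-1.0-1.x86_64.rpm"  # placeholder
PKG_NAME = "foo"  # placeholder

# Size according to the web server (HEAD request, no body transfer)
req = urllib.request.Request(REPO_RPM_URL, method="HEAD")
with urllib.request.urlopen(req) as resp:
    httpd_size = int(resp.headers["Content-Length"])

# Size according to the yum metadata DB (repoquery ships with yum-utils)
db_size = int(subprocess.check_output(
    ["repoquery", "--queryformat", "%{size}", PKG_NAME],
    universal_newlines=True).strip())

print("httpd reports:    %d bytes" % httpd_size)
print("metadata reports: %d bytes" % db_size)
if db_size != httpd_size:
    # e.g. 2.5 MB served vs. 2.84 MB expected would stall at ~88%
    print("mismatch - stale repo metadata would explain a stall at %d%%"
          % (httpd_size * 100 // db_size))

If the two sizes disagree, stale metadata would explain yum aborting at a fixed percentage; in our case they apparently matched.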

So to summarize: I'm not really sure what went wrong here, and we'll need to keep an eye on this to see if it's really resolved.

Thanks in advance,


On Wed, Aug 2, 2017 at 4:05 PM, Barak Korren <bkorren@redhat.com> wrote:
On 2 August 2017 at 16:02, Barak Korren <bkorren@redhat.com> wrote:
>
> On 2 August 2017 at 14:24, Lev Veyde <lveyde@redhat.com> wrote:
>>
>> Hi Barak,
>>
>> Looks like a network connectivity issue:
>>
>
> 3 times in a row from a local server? For the same files? While we have 100
> other jobs downloading from the same server? While the exact same process is
> downloading successfully? No, that is not it.
>
> I was thinking we might have RPM files that have been changed, but I'm not
> finding different MD5 sums between what we have cached on the slave and what
> is in the repo. I will try to flush the local cache and rerun.
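
For reference, the MD5 comparison mentioned above amounts to roughly this; a minimal Python 3 sketch, with placeholder paths and URL rather than the real slave cache and repo locations:

import hashlib
import urllib.request

CACHED_RPM = "/var/cache/yum/packages/foo-1.0-1.x86_64.rpm"  # placeholder
REPO_RPM_URL = "http://repo.example.com/Packages/foo-1.0-1.x86_64.rpm"  # placeholder

def md5sum(data):
    """Hex MD5 digest of a byte string."""
    return hashlib.md5(data).hexdigest()

# Checksum of the copy cached on the slave
with open(CACHED_RPM, "rb") as f:
    cached = md5sum(f.read())

# Checksum of the copy currently in the repo
with urllib.request.urlopen(REPO_RPM_URL) as resp:
    in_repo = md5sum(resp.read())

print("cached on slave: %s" % cached)
print("in repo:         %s" % in_repo)
print("identical" if cached == in_repo else "the file changed in the repo!")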

Ah, I see you already got a successful run today, so recreating the metadata worked?
Or did we just get lucky and run it on a slave with an empty cache?

--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted



--

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel

lev@redhat.com | lveyde@redhat.com