On Thu, Oct 19, 2017 at 10:50 AM, Allon Mureinik <amureini(a)redhat.com>
wrote:
The missing deps issue happened again this morning [1]:
Traceback (most recent call last):
  File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/tmp/ovirt-q04eYYi5Ym/otopi-plugins/otopi/packagers/yumpackager.py", line 256, in _packages
    if self._miniyum.buildTransaction():
  File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/miniyum.py", line 920, in buildTransaction
    raise yum.Errors.YumBaseError(msg)
YumBaseError: [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3',
 u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']
2017-10-19 01:36:37,275-0400 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Package installation': [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']
We need to fix the missing packages issue (broken repo?) ASAP, as it would
mask any other real problems we may have there.
We're looking into it now; it's strange that the official qemu-kvm-ev
requires a version of ipxe-roms-qemu with a git sha,
20170123-1.git4e85b27.el7_4.1.
It looks like the same package is coming from CentOS base, updates and
kvm-commons, and some repos include an older version without the '4.1' suffix.
However, it's strange that some jobs do pass, e.g. the last finished run from
1.5 hours ago:
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
[1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tes...
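To see at a glance what is actually unmet, one quick sanity check is to pull the "X requires Y" clauses out of the otopi error. This is just a sketch: the `log_line` below is copied verbatim from the traceback above, and the follow-up `yum provides` / `repoquery` check would of course have to run on the failing slave itself.

```shell
# Extract the unmet "X requires Y >= Z" clauses from the YumBaseError
# message quoted above (copied verbatim into log_line for illustration).
log_line="[u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']"

# List each unmet requirement clause:
echo "$log_line" | grep -oE "[^'[]+ requires [^']+"

# Keep only the names of the required (missing) packages:
missing=$(echo "$log_line" | grep -oE "[^'[]+ requires [^']+" \
    | awk -F' requires ' '{print $2}' | awk '{print $1}')
echo "$missing"   # prints the two required package names, one per line

# On the failing slave one would then check which repo carries each name,
# e.g.:  yum provides ipxe-roms-qemu   or   repoquery --show-duplicates ipxe-roms-qemu
```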
On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina <mperina(a)redhat.com>
wrote:
>
>
> On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg <danken(a)redhat.com>
> wrote:
>
>> On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky <dbelenky(a)redhat.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> *The following test is failing:* 002_bootstrap.verify_add_hosts
>>> *All logs from failing job
>>>
>>> <http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
>>> *Only 2 engine patches participated in the test, so the suspected
>>> patches are:*
>>>
>>> 1. *https://gerrit.ovirt.org/#/c/82542/2*
>>> 2. *https://gerrit.ovirt.org/#/c/82545/3*
>>>
>>> Because another failure was present when this error was first
>>> introduced, the CI can't automatically detect the specific patch.
>>>
>>> *Error snippet from logs:* *ovirt-host-deploy-ansible log
>>> <http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan...
>>> (Full log)*
>>>
>>> TASK [ovirt-host-deploy-firewalld : Enable firewalld rules] ********************
>>> failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'}) => {"changed": false, "failed": true, "item": {"service": "glusterfs"}, "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Permanent and Non-Permanent(immediate) operation, Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
>>>
>>>
>>> *Error from HOST 0 firewalld log: lago-basic-suite-master-host-0/_var_log/firewalld/
>>> <http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_chan... (Full log)*
>>>
>>> 2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among existing services
>>>
>>>
>> Ondra, would such an error propagate through the playbook to Engine and
>> fail the add-host flow? (I think it should!)
>>
>
> We didn't do that so far because of EL 7.3: we need firewalld from 7.4 to
> have all the services available (I don't remember exactly, but I think the
> imageio service was the one delivered only in firewalld from 7.4). So up
> until now we ignore non-existent firewalld services, but if needed we can
> turn this on and fail host deploy.
>
>
>
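The skip-on-missing behaviour Martin describes can be sketched in shell. To be clear, this is a hypothetical illustration and not the actual ovirt-host-deploy-firewalld task: try to enable the service permanently, and downgrade INVALID_SERVICE to a warning instead of failing host deploy.

```shell
# Hypothetical sketch of the tolerant behaviour described above, NOT the
# real ovirt-host-deploy-firewalld code: enable the firewalld service if
# it exists, otherwise warn and continue instead of failing.
svc=glusterfs
if firewall-cmd --permanent --add-service="$svc" 2>/dev/null; then
    firewall-cmd --reload
    status=enabled
else
    # This branch covers both INVALID_SERVICE and firewalld
    # not being present/running on the host at all.
    echo "WARNING: firewalld service '$svc' not available, skipping" >&2
    status=skipped
fi
echo "$svc: $status"
```

Turning the failure mode on would simply mean replacing the warning branch with a hard error once EL 7.3 no longer needs to be supported.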
>>
>> Do you know which package provides the glusterfs firewalld service, and
>> why it is missing from the host?
>>
>
> We have used the 'glusterfs' firewalld service per Sahina's
> recommendation; it is included in the glusterfs-server package from version
> 3.7.6 [1]. But this package is not installed when installing packages for a
> cluster with gluster capabilities enabled. So now I'm confused: don't we
> need the glusterfs-server package? If not, and we need those ports open
> because they are used by services from other, already installed glusterfs
> packages, shouldn't the firewalld configuration be moved from
> glusterfs-server to the glusterfs package?
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
>
>
>
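One way to answer the "which package provides it" question directly on the affected host is to look for the service definition file and ask rpm who owns it. A sketch, assuming the stock firewalld layout on EL7 (service XMLs under /etc/firewalld/services and /usr/lib/firewalld/services):

```shell
# Sketch: a firewalld service named 'glusterfs' exists only if a matching
# XML definition is present in one of these directories (stock EL7 layout).
svc=glusterfs
found=no
for dir in /etc/firewalld/services /usr/lib/firewalld/services; do
    if [ -f "$dir/$svc.xml" ]; then
        found=yes
        # Ask rpm which package shipped the definition
        # (expected: glusterfs-server >= 3.7.6, per [1] above).
        rpm -qf "$dir/$svc.xml"
    fi
done
echo "firewalld service '$svc' present: $found"
```

On the failing lago host this should print "present: no", matching the INVALID_SERVICE error and confirming that glusterfs-server was never installed.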
_______________________________________________
Devel mailing list
Devel(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
--
Eyal edri
MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)