Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ bootstrap.verify_add_hosts ] [ 18/10/17 ]

On Thu, Oct 19, 2017 at 7:35 AM, Dan Kenigsberg <danken@redhat.com> wrote:
On Wed, Oct 18, 2017 at 2:40 PM, Daniel Belenky <dbelenky@redhat.com> wrote:
Hi all,
The following test is failing: 002_bootstrap.verify_add_hosts
All logs from the failing job: http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/
Only 2 engine patches participated in the test, so the suspected patches are:
1. https://gerrit.ovirt.org/#/c/82542/2
2. https://gerrit.ovirt.org/#/c/82545/3
Because another error was present when this error was first introduced, the CI can't automatically detect the specific patch.
Error snippet from logs: ovirt-host-deploy-ansible log (full log: http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20171015165106-lago-basic-suite-master-host-0-74ed9407.log/*view*/)
TASK [ovirt-host-deploy-firewalld : Enable firewalld rules] ********************
failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'}) => {"changed": false, "failed": true, "item": {"service": "glusterfs"}, "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Permanent and Non-Permanent(immediate) operation, Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
Error from host 0 firewalld log: lago-basic-suite-master-host-0/_var_log/firewalld/ (full log: http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3166/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-0/_var_log/firewalld/*view*/)
2017-10-15 16:51:24 ERROR: INVALID_SERVICE: 'glusterfs' not among existing services
Ondra, would such an error propagate through the playbook to Engine and fail the add-host flow? (I think it should!)
We didn't do that so far because of EL 7.3: we need firewalld from 7.4 to have all the available services in place (I don't remember exactly, but I think the imageio service was the one delivered only in firewalld from 7.4). So up until now we ignore non-existent firewalld services, but if needed we can turn this on and fail host deploy.
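(For illustration, a minimal sketch of the skip-if-missing behavior Martin describes, using the standard firewall-cmd CLI; the actual playbook logic may differ:)

    # Only try to enable a service firewalld actually knows about; on EL 7.3
    # some service definitions (e.g. glusterfs) may not exist yet.
    if firewall-cmd --get-services | grep -qw glusterfs; then
        firewall-cmd --permanent --add-service=glusterfs
        firewall-cmd --reload
    else
        echo "firewalld service 'glusterfs' not defined; skipping" >&2
    fi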
Do you know which package provides the glusterfs firewalld service, and why it is missing from the host?
So we have used the 'glusterfs' firewalld service per Sahina's recommendation; it is included in the glusterfs-server package from version 3.7.6 [1]. But this package is not installed when installing packages for a cluster with gluster capabilities enabled. So now I'm confused: don't we need the glusterfs-server package? If not, and we need those ports open because they are used by services from other, already installed glusterfs packages, shouldn't the firewalld configuration be moved from glusterfs-server to the glusterfs package? [1] https://bugzilla.redhat.com/show_bug.cgi?id=1057295
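(As an aside, one way to verify where the service definition should come from, sketched with standard rpm/yum commands; the service XML path follows the usual firewalld packaging layout:)

    # If the file exists locally, ask rpm which package owns it:
    rpm -qf /usr/lib/firewalld/services/glusterfs.xml
    # Otherwise, ask the enabled repos which package would provide it:
    yum whatprovides '*/firewalld/services/glusterfs.xml'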

The missing deps issue happened again this morning [1]:

Traceback (most recent call last):
  File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/tmp/ovirt-q04eYYi5Ym/otopi-plugins/otopi/packagers/yumpackager.py", line 256, in _packages
    if self._miniyum.buildTransaction():
  File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/miniyum.py", line 920, in buildTransaction
    raise yum.Errors.YumBaseError(msg)
YumBaseError: [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']
2017-10-19 01:36:37,275-0400 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Package installation': [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']

We need to fix the missing packages (broken repo?) issue ASAP, as it would mask any other real problems we may have there.

[1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_...

On Thu, Oct 19, 2017 at 10:50 AM, Allon Mureinik <amureini@redhat.com> wrote:
The missing deps issue happened again this morning [1].
We need to fix the missing packages (broken repo?) issue ASAP, as it would mask any other real problems we may have there.
We're looking into it now. It's strange that the official qemu-kvm-ev requires a version of ipxe-roms-qemu with git sha 20170123-1.git4e85b27.el7_4.1. It looks like the same package comes from CentOS base, updates, and kvm-common, and some repos include an older version without the '4.1' suffix. However, it's strange that some jobs do pass, e.g. the last finished run from 1.5 hours ago: http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3362/
[1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_...
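(A quick way to check the repo mixing Eyal describes, sketched with standard yum tooling; repoquery is assumed to be available from yum-utils:)

    # Show every candidate version of the package and which repo offers it:
    yum --showduplicates list ipxe-roms-qemu
    # Confirm the exact version qemu-kvm-ev requires:
    repoquery --requires qemu-kvm-ev | grep -i ipxe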
-- Eyal Edri, Manager, RHV DevOps, EMEA Virtualization R&D, Red Hat EMEA | phone: +972-9-7692018 | irc: eedri (on #tlv #rhev-dev #rhev-integ)

On 19 October 2017 at 11:43, Eyal Edri <eedri@redhat.com> wrote:
However, it's strange that some jobs do pass, e.g. the last finished run from 1.5 hours ago:
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/3362/
There is nothing strange about this. The failure Allon linked to is in the OST check-patch run for Daniel's patch [1], which closes the external repos, so it is expected to fail on issues like this. The patch is not merged, so it doesn't cause any issues for "normal" (e.g. change-queue) OST runs.
[1]: https://gerrit.ovirt.org/c/82602/
-- Barak Korren, RHV DevOps team, RHCE, RHCi, Red Hat EMEA

On Thu, Oct 19, 2017 at 11:51 AM, Barak Korren <bkorren@redhat.com> wrote:
On 19 October 2017 at 11:43, Eyal Edri <eedri@redhat.com> wrote:
On Thu, Oct 19, 2017 at 10:50 AM, Allon Mureinik <amureini@redhat.com> wrote:
The missing deps issue happened again this morning [1]:
Traceback (most recent call last): File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/context.py", line 133, in _executeMethod method['method']() File "/tmp/ovirt-q04eYYi5Ym/otopi-plugins/otopi/packagers/yumpackager.py", line 256, in _packages if self._miniyum.buildTransaction(): File "/tmp/ovirt-q04eYYi5Ym/pythonlib/otopi/miniyum.py", line 920, in buildTransaction raise yum.Errors.YumBaseError(msg) YumBaseError: [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1'] 2017-10-19 01:36:37,275-0400 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Package installation': [u'vdsm-4.20.3-205.git15d3b78.el7.centos.x86_64 requires libvirt-daemon-kvm >= 3.2.0-14.el7_4.3', u'10:qemu-kvm-ev-2.9.0-16.el7_4.5.1.x86_64 requires ipxe-roms-qemu >= 20170123-1.git4e85b27.el7_4.1']
We need to fix the missing packages (broken repo?) issue ASAP, as it would mast any other real problems we may have there
We're looking into it now, it's strange that official qemu-kvm-ev is requiring a version of ipxe-roms-qemu with git sha 20170123-1.git4e85b27.el7_4.1. It looks like the same pkg is coming from centos base, updateds and kvm-commons, and some repos include older version without the '4.1' suffix.
However, its strange that some jobs does pass, e.g - last finished run from 1.5 hours ago:
http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovir t-master_change-queue-tester/3362/
There is nothing strange about this. The failure Allon linked to is in the OST check-patch run for Daniel's patch [1], which closes the external repos, so it is expected to fail on issues like this. The patch is not merged, so it doesn't cause any issues for "normal" (e.g. change-queue) OST runs.
Actually, I can't see that missing package error anymore, so whatever it was, it might be fixed (it failed on another patch as well, not just Daniel's). There might be a second issue here with the error Daniel sent about glusterfs-server and firewalld; I think we should focus on investigating that.
[1] : https://gerrit.ovirt.org/c/82602/

On Thu, Oct 19, 2017 at 10:53 AM, Eyal Edri <eedri@redhat.com> wrote:
Actually, I can't see that missing package error anymore, so whatever it was, it might be fixed (it failed on another patch as well, not just Daniel's). There might be a second issue here with the error Daniel sent about glusterfs-server and firewalld; I think we should focus on investigating that.
This is not an issue which should cause jobs to fail (we ignore this error during firewalld setup), because AFAIK we don't run any gluster-related tests in basic OST. Anyway, the discussion about the missing gluster firewalld service continues ...

On 19 October 2017 at 10:50, Allon Mureinik <amureini@redhat.com> wrote:
The missing deps issue happened again this morning [1]:
Why are you looking at the OST check-patch job? It has little to do with how OST runs when it is used to check other projects (for example, it runs all suites, as opposed to just the stable ones, and it does not use our repo protection mechanisms...). Also, that particular patch plays with the repo configuration, so it is expected to fail on repo issues...
OST is stable ATM for all projects except engine; here are some passing examples:
- http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-...
- http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-...
- http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-...
I think that is strong enough evidence that the issue is in engine code and not in OST/repos/other places people like to point to.
-- Barak Korren, RHV DevOps team, RHCE, RHCi, Red Hat EMEA

On Thu, Oct 19, 2017 at 10:29 AM, Martin Perina <mperina@redhat.com> wrote:
So up until now we ignore non-existent firewalld services, but if needed we can turn this on and fail host deploy.
Ok, so for now you're "luckily" off the hook and not the reason for the failure.
Do you know which package provides the glusterfs firewalld service, and why it is missing from the host?
So we have used the 'glusterfs' firewalld service per Sahina's recommendation; it is included in the glusterfs-server package from version 3.7.6 [1]. But this package is not installed when installing packages for a cluster with gluster capabilities enabled. So now I'm confused: don't we need the glusterfs-server package? If not, and we need those ports open because they are used by services from other, already installed glusterfs packages, shouldn't the firewalld configuration be moved from glusterfs-server to the glusterfs package?
glusterfs-cli.rpm is required to consume gluster storage (the virt use case), but I don't recall that it needs open ports. glusterfs-server.rpm is required to provide gluster storage (the gluster use case). If I recall correctly, the firewalld feature differentiated between the two, opening the needed ports only when relevant.
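(To see which side of that split a given host is actually on, a sketch using standard rpm/yum-utils commands:)

    # Which glusterfs packages are installed on the host?
    rpm -qa 'glusterfs*' | sort
    # Does glusterfs-server (not installed here) ship the firewalld service file?
    repoquery -l glusterfs-server | grep firewalld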

On Thu, Oct 19, 2017 at 10:58 AM, Dan Kenigsberg <danken@redhat.com> wrote:
glusterfs-cli.rpm is required to consume gluster storage (the virt use case), but I don't recall that it needs open ports.
It was there even for iptables: if gluster support is enabled on the cluster, then the gluster-specific ports were opened even with iptables. The firewalld feature continues to use that.
glusterfs-server.rpm is required to provide gluster storage (the gluster use case). If I recall correctly, the firewalld feature differentiated between the two, opening the needed ports only when relevant.
Right, but if gluster services are configured for firewalld, it means that the host has been added to a cluster with the gluster feature enabled, not only virt.
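(On a deployed host, what the engine actually configured can be checked with the standard firewalld CLI:)

    # Services currently enabled in the default zone:
    firewall-cmd --list-services
    # Service definitions firewalld knows about at all:
    firewall-cmd --get-services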

Taken from the ansible-playbook log of host-0:

TASK [ovirt-host-deploy-firewalld : Enable firewalld rules] ********************
failed: [lago-basic-suite-master-host-0] (item={u'service': u'glusterfs'}) => {"changed": false, "failed": true, "item": {"service": "glusterfs"}, "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Permanent and Non-Permanent(immediate) operation, Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}

Shouldn't we fail the playbook on a firewall configuration failure?
-- Gal Ben Haim, RHV DevOps

So the real issue on adding a host is the same as the one I described today in [2], and it is most probably caused by [3] (I reverted engine in my dev env to before this patch and host deploy finished successfully). Allon, do you have time to post a fix? If not, I'll try to dig into your change and the related networking code to post it ...
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1504005
[3] https://gerrit.ovirt.org/#/c/82545/
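(For reference, a sketch of the revert-and-retest step Martin describes; the gerrit ref follows the standard refs/changes layout, but the exact project URL and patch set are assumptions based on the link above:)

    # Fetch the suspect change (patch set 3 of 82545) and revert it locally,
    # then rebuild the engine and retry host deploy; assumes the change is on
    # the current branch and the revert applies cleanly.
    git fetch https://gerrit.ovirt.org/ovirt-engine refs/changes/45/82545/3
    git revert --no-edit FETCH_HEAD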

Bloody hell. The original was also completely broken, and worked by chance. Damn it. This should fix it: https://gerrit.ovirt.org/#/c/82989/
On Thu, Oct 19, 2017 at 3:49 PM, Martin Perina <mperina@redhat.com> wrote:
So the real issue on adding a host is the same as the one I described today in [2], and it is most probably caused by [3] (I reverted engine in my dev env to before this patch and host deploy finished successfully).
Allon, do you have time to post a fix? If not, I'll try to dig into your change and the related networking code to post it ...
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1504005 [3] https://gerrit.ovirt.org/#/c/82545/

Fix merged, based on Alona's and Martin's reviews. It seems to do the trick in my testing on my local engine; let's hope that's really it.
On Thu, Oct 19, 2017 at 4:37 PM, Allon Mureinik <amureini@redhat.com> wrote:
Bloody hell. The original was also completely broken, and worked by chance. Damn it.
This should fix it: https://gerrit.ovirt.org/#/c/82989/

On 19 October 2017 at 17:48, Allon Mureinik <amureini@redhat.com> wrote:
Fix merged, based on Alona's and Martin's reviews. It seems to do the trick in my testing on my local engine; let's hope that's really it.
Umm... it does not seem to be merged yet...
-- Barak Korren, RHV DevOps team, RHCE, RHCi, Red Hat EMEA

Not my finest hour. Thanks, Barak, it's merged now.
On Thu, Oct 19, 2017 at 6:29 PM, Barak Korren <bkorren@redhat.com> wrote:
Umm... It does not seem to be merged yet...
Participants (6): Allon Mureinik, Barak Korren, Dan Kenigsberg, Eyal Edri, Gal Ben Haim, Martin Perina