> Engine - no, running from RHVM - yes - if you are using Ansible, I think it
> makes sense to use a single common script or possibly per cluster.

Exactly my point. The engine should not manage those, but it should
still know how to execute them to perform a proper host deploy.
Martin
On Thu, Apr 6, 2017 at 2:13 PM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>
>
> On Thu, Apr 6, 2017 at 2:56 PM, Leon Goldberg <lgoldber(a)redhat.com> wrote:
>>
>> Hey,
>>
>> There seems to be a growing consensus towards moving custom rules out of
>> Engine. It is believed that Engine shouldn't have assumed the role of a
>> centralized firewall management system in the first place, and that using a
>> proper 3rd party solution will both be favorable to the users (allowing
>> better functionality and usability) and allow us to simplify our
>> firewall deployment process.
>>
>> Considering we don't have to manage custom rules, a host will be able to
>> derive all the information regarding its firewalld services from its own
>> configuration. Consequently, option #2 becomes a forerunner with Engine's
>> involvement being even further diminished.
>
>
> Engine - no, running from RHVM - yes - if you are using Ansible, I think it
> makes sense to use a single common script or possibly per cluster.
> Y.
>
>>
>>
>>
>> On Sun, Mar 26, 2017 at 1:33 PM, Leon Goldberg <lgoldber(a)redhat.com>
>> wrote:
>>>
>>>
>>> Hey,
>>>
>>> We're looking to migrate from iptables to firewalld. We came up with a
>>> couple of possible approaches we'd like opinions on. I'll list the options
>>> first, and will go into more detail below.
>>>
>>> 1) Replicate existing flow:
>>>
>>> As of today, iptables rules are inserted into the database via SQL config
>>> files. During host deployment, VdsDeployIptablesUnit adds the required rules
>>> (based on cluster/firewall configuration) to the deployment configuration,
>>> en route to being deployed on the host via otopi and its iptables plugin.
>>>
>>> Pros:
>>>
>>> - Reuse of existing infrastructure.
>>>
>>> Cons:
>>>
>>> - Current infrastructure is overly complex...
>>> - Many of the required services are provided by firewalld. Rewriting them
>>> is wasteful; specifying them (instead of providing actual service .xml
>>> content) will require adaptations on both (engine/host) sides. More on that
>>> later.
>>>
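For readers less familiar with the current flow, the host-side end of it conceptually boils down to something like the Python sketch below. This is an illustration only, not the actual otopi plugin code; the file path and function are assumptions.

    # Conceptual sketch only -- not the real otopi iptables plugin.
    import subprocess

    RULES_FILE = "/etc/sysconfig/iptables"  # assumed persistence path

    def apply_engine_rules(rules_text):
        # Persist the engine-provided ruleset so it survives reboots.
        with open(RULES_FILE, "w") as f:
            f.write(rules_text)
        # iptables-restore reads a complete ruleset from stdin and applies it.
        subprocess.run(["iptables-restore"], input=rules_text.encode(),
                       check=True)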
>>>
>>> 2) Host side based configuration:
>>>
>>> Essentially, all the required logic (aforementioned cluster/firewall
>>> configuration) to determine if/how firewalld should be deployed could be
>>> passed on to the host via ohd (ovirt-host-deploy). Vdsm could take on the responsibility of
>>> examining the relevant configuration, and then creating and/or adding the
>>> required services (using vdsm.conf and vdsm-tool).
>>>
>>> Pros:
>>>
>>> - Engine side involvement is greatly diminished.
>>> - Simple(r).
>>>
>>> Cons:
>>>
>>> - Custom services/rules capabilities will have to be rethought and
>>> re-implemented (the current infrastructure supports custom iptables rules
>>> by letting them be specified in the SQL config file).
>>>
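To make option 2 more concrete, the host-side piece could look roughly like the sketch below. The verb-style function, the service list and the idea that it is derived from vdsm.conf are assumptions; only the firewall-cmd invocations are standard.

    # Hypothetical sketch of a vdsm-tool-style verb -- not actual vdsm code.
    import subprocess

    # Assumed: the host derives this list from its own configuration
    # (e.g. vdsm.conf); the names below are only examples.
    REQUIRED_SERVICES = ["libvirt-tls", "ovirt-imageio", "ovirt-vmconsole"]

    def configure_firewalld(services=REQUIRED_SERVICES):
        for name in services:
            # --permanent stores the change; it takes effect on --reload.
            subprocess.run(
                ["firewall-cmd", "--permanent", "--add-service=" + name],
                check=True)
        subprocess.run(["firewall-cmd", "--reload"], check=True)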
>>>
>>> 3) Some other hybrid approach:
>>>
>>> If we're able to guarantee all the required firewalld services are
>>> statically provided one way or the other, the current procedure could be
>>> replicated and made simpler. Instead of providing XML content in the form
>>> of strings, service names could be supplied. Actual service deployment
>>> becomes easier, and could be left to otopi (with
>>> the appropriate modifications) or switched over to vdsm.
>>>
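To illustrate the difference in what the engine would hand over under option 3, compare the two payload shapes below. Both are made up for illustration; neither is an actual engine/host format, and the service names and port are hypothetical.

    # Illustrative only -- not an actual engine/host payload format.
    # Shipping full service definitions as XML strings:
    payload_as_xml = {
        "ovirt-custom": ('<service>'
                         '<short>ovirt-custom</short>'
                         '<port protocol="tcp" port="12345"/>'  # hypothetical
                         '</service>'),
    }

    # Shipping only the names of services assumed to already exist on the host:
    payload_as_names = ["libvirt-tls", "ovirt-imageio", "ovirt-vmconsole"]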
>>> --
>>>
>>> Regardless, the use of statically provided vs. dynamically created services
>>> remains an open question. I think we'd like to avoid implementing logic
>>> that asks whether some service is provided (and then writes it if it
>>> isn't...), and so choosing between the dynamic and static approaches is
>>> also needed. With the static approach, guaranteeing all services are
>>> provided will be required.
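For reference, the probe-then-write logic the paragraph above would like to avoid might start with a check like the one below; the firewall-cmd query is standard, the surrounding function is only an illustration.

    # Sketch of the "is this service already provided?" probe.
    import subprocess

    def service_is_provided(name):
        # firewall-cmd --get-services prints the names of all available
        # (statically defined) services, space separated.
        out = subprocess.check_output(["firewall-cmd", "--get-services"])
        return name in out.decode().split()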
>>>
>>> I do believe guaranteeing the presence of all required services is worth
>>> it; however, custom services aren't going to be naively compatible, and
>>> we'll still have to use a similar mechanism to the one described in #1
>>> (service string -> .xml -> addition of the service name to the active zone).
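The custom-service path mentioned here (service string -> .xml -> active zone) could, roughly, look like the sketch below. The directory and the firewall-cmd calls follow standard firewalld conventions; the function itself is only an illustration.

    # Rough sketch of the service string -> .xml -> zone path -- illustrative.
    import subprocess

    SERVICE_DIR = "/etc/firewalld/services"  # local (non-packaged) services

    def deploy_custom_service(name, xml_content, zone="public"):
        # Drop the definition where firewalld looks for locally added services.
        with open("%s/%s.xml" % (SERVICE_DIR, name), "w") as f:
            f.write(xml_content)
        # Reload so firewalld sees the new definition before we reference it.
        subprocess.run(["firewall-cmd", "--reload"], check=True)
        subprocess.run(["firewall-cmd", "--permanent",
                        "--zone=" + zone, "--add-service=" + name],
                       check=True)
        subprocess.run(["firewall-cmd", "--reload"], check=True)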
>>>
>>>
>>> Your thoughts are welcome.
>>>
>>> Thanks,
>>> Leon
>>>
>>
>>
>> _______________________________________________
>> Devel mailing list
>> Devel(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel