[Engine-devel] Gluster IPTable configuration

Alon Bar-Lev alonbl at redhat.com
Mon Sep 3 13:11:57 UTC 2012



----- Original Message -----
> From: "Shireesh Anjal" <sanjal at redhat.com>
> To: "Alon Bar-Lev" <alonbl at redhat.com>
> Cc: "Selvasundaram" <sesubram at redhat.com>, engine-devel at ovirt.org
> Sent: Monday, September 3, 2012 4:00:14 PM
> Subject: Re: [Engine-devel] Gluster IPTable configuration
> 
> On Monday 03 September 2012 06:22 PM, Alon Bar-Lev wrote:
> >
> > ----- Original Message -----
> >> From: "Shireesh Anjal" <sanjal at redhat.com>
> >> To: engine-devel at ovirt.org
> >> Cc: "Alon Bar-Lev" <alonbl at redhat.com>, "Selvasundaram"
> >> <sesubram at redhat.com>
> >> Sent: Monday, September 3, 2012 3:42:17 PM
> >> Subject: Re: [Engine-devel] Gluster IPTable configuration
> >>
> >> On Friday 31 August 2012 12:05 AM, Alon Bar-Lev wrote:
> >>> ----- Original Message -----
> >>>> From: "Selvasundaram" <sesubram at redhat.com>
> >>>> To: engine-devel at ovirt.org
> >>>> Cc: "Shireesh Anjal" <sanjal at redhat.com>
> >>>> Sent: Thursday, August 30, 2012 4:30:16 PM
> >>>> Subject: [Engine-devel] Gluster IPTable configuration
> >>>>
> >>>>
> >>>> Hi,
> >>>>
> >>>> I want to add gluster-specific IPTable configuration in addition
> >>>> to the ovirt IPTable configuration (if it is a gluster node).
> >>>>
> >>>> There are two approaches:
> >>>> 1. Have one more gluster-specific IPTable config in the db and
> >>>>    merge it with the ovirt IPTable config (merging, NOT appending)
> >>>>    [I have the patch engine: Gluster specific firewall
> >>>>    configurations #7244]
> >>>> 2. Have two different IPTable configs (ovirt and ovirt+gluster)
> >>>>    and use either one.
> >>>>
> >>>> Please provide your suggestions or improvements on this.
> >>>>
> >>>>
> >>> Hello all,
> >>>
> >>> The mentioned patch[1] adds hard-coded gluster logic to the
> >>> bootstrap code, manipulating the firewall configuration to be
> >>> gluster specific: it searches for the hard-coded "reject" rule
> >>> and inserts new rules before some others.
> >>>
> >>> I believe this hard-coded approach is obsolete now that we have
> >>> proper templating tools.
> >>>
> >>> A more robust solution would be to define generic profiles, each
> >>> profile as a template; each template can refer to other profiles,
> >>> and a profile is assigned to each node.
> >>>
> >>> This way the implementation is not gluster-specific [or specific
> >>> to any service], can be reused for more setups, and the code is
> >>> cleaner.
> >>>
> >>> Example:
> >>>
> >>> BASIC.PRE
> >>>       :INPUT ACCEPT [0:0]
> >>>       :FORWARD ACCEPT [0:0]
> >>>       :OUTPUT ACCEPT [0:0]
> >>> BASIC.IN
> >>>       accept ...
> >>>       accept ...
> >>> BASIC.POST
> >>>       reject ...
> >>>       reject ...
> >>>
> >>> BASIC
> >>>       ${BASIC.PRE}
> >>>       ${BASIC.IN}
> >>>       ${BASIC.POST}
> >>>
> >>> GLUSTER
> >>>       ${BASIC.PRE}
> >>>       ${BASIC.IN}
> >>>       accept ...
> >>>       ${BASIC.POST}
> >>>       reject ...
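[The profile scheme quoted above could be implemented with a small recursive template resolver. A minimal sketch, using the BASIC/GLUSTER fragments from the example; all names and rules here are illustrative, not the actual engine code:]

```python
import re

# Hypothetical profile store mirroring the example above: each profile
# is a text fragment that may reference other profiles via ${NAME}.
PROFILES = {
    "BASIC.PRE": ":INPUT ACCEPT [0:0]\n:FORWARD ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]",
    "BASIC.IN": "-A INPUT -p tcp --dport 22 -j ACCEPT",
    "BASIC.POST": "-A INPUT -j REJECT --reject-with icmp-host-prohibited",
    "BASIC": "${BASIC.PRE}\n${BASIC.IN}\n${BASIC.POST}",
    "GLUSTER": "${BASIC.PRE}\n${BASIC.IN}\n"
               "-A INPUT -p tcp --dport 24007:24047 -j ACCEPT\n"
               "${BASIC.POST}",
}

def resolve(name, profiles=PROFILES):
    """Expand ${...} profile references recursively into a flat script."""
    def expand(match):
        return resolve(match.group(1), profiles)
    return re.sub(r"\$\{([\w.]+)\}", expand, profiles[name])
```

[With this, assigning the GLUSTER profile to a node and calling `resolve("GLUSTER")` yields a complete rule script, while plain nodes get `resolve("BASIC")` with no gluster rules at all.]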
> >> I like the separation of PRE/IN/POST rules here. However, I think
> >> it is better to keep the service-specific rules in separate
> >> configurations. Currently, the whole iptables rules script is kept
> >> in the vdc option "IPTablesConfig". How about changing this as
> >> follows?
> >>
> >> - Split the current config into three: IPTablesConfig.PRE,
> >>   IPTablesConfig.VIRT and IPTablesConfig.POST
> >> - Let services like Gluster add their own vdc options, e.g.
> >>   IPTablesConfig.GLUSTER
> >> - When assembling the full script in VdsInstaller:
> >>     - Take IPTablesConfig.PRE
> >>     - Append IPTablesConfig.<service> for every service to be
> >>       enabled on the host/cluster
> >>     - Append IPTablesConfig.POST
> >>
> >> Thoughts?
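[The assembly step proposed above is essentially a concatenation of vdc-option fragments. A rough sketch, assuming a lookup callable for vdc options (`get_vdc_option` is a hypothetical helper, not an actual VdsInstaller API):]

```python
def assemble_iptables_script(get_vdc_option, enabled_services):
    """Build the full iptables script per the proposal above:
    PRE, then one section per enabled service, then POST."""
    parts = [get_vdc_option("IPTablesConfig.PRE")]
    for service in enabled_services:  # e.g. ["VIRT", "GLUSTER"]
        parts.append(get_vdc_option("IPTablesConfig." + service))
    parts.append(get_vdc_option("IPTablesConfig.POST"))
    return "\n".join(parts)
```

[A host in a virt-only cluster would be assembled with `["VIRT"]`, a gluster host with `["VIRT", "GLUSTER"]`, so each service contributes its section only where it is enabled.]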
> > This is a simple approach that will work for the current
> > implementation and configuration.
> >
> > However, it will affect all nodes, with or without gluster.
> 
> I don't get the concern here. Could you please elaborate?

If we have 500 nodes, 200 of them gluster, why do I need to distribute gluster-specific rules to all 500?

Alon.
