[Engine-devel] Introducing virt / gluster flags at cluster level
Ayal Baron
abaron at redhat.com
Sun Mar 4 22:21:11 UTC 2012
----- Original Message -----
> On 03/04/2012 11:42 PM, Ayal Baron wrote:
> >
> >
> > ----- Original Message -----
> >> On 03/04/2012 03:22 PM, Ayal Baron wrote:
> >>>
> >>>
> >>> ----- Original Message -----
> >>>>
> >>>>
> >>>> ----- Original Message -----
> >>>>> From: "Moti Asayag"<masayag at redhat.com>
> >>>>> To: engine-devel at ovirt.org
> >>>>> Sent: Sunday, March 4, 2012 12:20:43 PM
> >>>>> Subject: Re: [Engine-devel] Introducing virt / gluster flags at
> >>>>> cluster level
> >>>>>
> >>>>> On 03/01/2012 12:54 PM, Shireesh Anjal wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>> In order to identify whether a cluster exposes Gluster /
> >>>>>> Virtualization capabilities, we plan to introduce two boolean
> >>>>>> columns - virt_service and gluster_service - in the vds_groups
> >>>>>> table. As per immediate plans, it is intended to support only
> >>>>>> one service per cluster, meaning only one of these two values
> >>>>>> can be true.
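
A minimal sketch of the boolean-columns approach described above, assuming a PostgreSQL-style schema; the defaults and the constraint enforcing "only one of the two can be true" are illustrative, not the actual upgrade script:

-- Sketch only: add the two proposed flags to vds_groups and enforce
-- the "at most one of the two is true" rule with a CHECK constraint.
-- Defaults and the constraint name are assumptions.
ALTER TABLE vds_groups
    ADD COLUMN virt_service BOOLEAN NOT NULL DEFAULT TRUE,
    ADD COLUMN gluster_service BOOLEAN NOT NULL DEFAULT FALSE;

ALTER TABLE vds_groups
    ADD CONSTRAINT vds_groups_one_service_ck
    CHECK (NOT (virt_service AND gluster_service));
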
> >>>>>
> >>>>> Couldn't there be additional services in the future? In that
> >>>>> case it might be worth considering an enum for services, stored
> >>>>> in a single service column (its values being: virt, gluster, ...)
> >>>>> instead of extending the vds_groups table again and again when
> >>>>> introducing new services (under the assumption that no mix of
> >>>>> services is allowed).
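
A sketch of the single-column variant suggested here, assuming a plain text column constrained to the known values (a native enum type would work too); column and constraint names are illustrative:

-- Sketch only: one service column instead of per-service booleans.
-- The CHECK list grows with new services, but the table shape stays fixed.
ALTER TABLE vds_groups
    ADD COLUMN service VARCHAR(32) NOT NULL DEFAULT 'virt';

ALTER TABLE vds_groups
    ADD CONSTRAINT vds_groups_service_ck
    CHECK (service IN ('virt', 'gluster'));
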
> >>>>
> >>>> +1 for an Enum instead of boolean columns. We have too many of
> >>>> those already, and eventually we end up with long records and
> >>>> routine refactoring of our DAL.
> >>>> Also, to allow mixed configurations we can embrace bit fields,
> >>>> which interact very nicely with Enums: e.g. a value of 5 is a
> >>>> cluster with VIRT(1) and FUTURE(4) capabilities.
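
A sketch of the bit-field variant, assuming the flags are packed into a single integer column with VIRT = 1, GLUSTER = 2 and a hypothetical FUTURE = 4 (so 5 = VIRT + FUTURE, as in the example above); the column and id names are assumptions:

-- Sketch only: pack service flags into one integer column.
ALTER TABLE vds_groups
    ADD COLUMN services_mask INTEGER NOT NULL DEFAULT 1;

-- Find all clusters that provide the VIRT service (bit 1 set):
-- a value of 5 (VIRT + FUTURE) matches, a value of 2 (GLUSTER only) does not.
-- (vds_group_id is assumed to be the cluster id column.)
SELECT vds_group_id
FROM vds_groups
WHERE (services_mask & 1) <> 0;
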
> >>>
> >>> I agree about the columns being inflexible, but personally I don't
> >>> like enums. What if we need to support finer-grained services?
> >>> e.g. different topologies (active/passive, active/active, etc.) or
> >>> other types of intricate relationships?
> >>> Not to mention that looking at the raw data you don't understand
> >>> what it means without holding a dictionary. And it's annoying to
> >>> maintain backward compatibility of the numbers once things change.
> >>
> >> enums are an issue, since we want to share them.
> >> i agree a services table seems the right approach, but actually,
> >> booleans are used here exactly because services seem to be the
> >> right long-term approach, yet there is a lot of ground to cover
> >> before we know what they would look like.
> >> so boolean flags for the first few services, learning from these,
> >> designing the bigger service models, and upgrading to it from the
> >> cluster-with-flags seems (to me) the right path to take.
> >
> > The db scheme should be something that doesn't change with every
> > major, minor and z-stream version; if it does, it means we're doing
> > something wrong.
> > Having a 'gluster service' boolean column means that by definition
> > we will be changing the db scheme for the next service we want to
> > add, or even for the same service once we need some more info.
> > To start with, we can just rely on the hosts themselves reporting
> > capabilities, and we can cache this info if we want it to display
> > quickly the next time we load. This would be similar to supported
> > cpu types.
> > If we have limitations on combining services, then the first
> > service utilized wins.
>
> we can't rely on hosts providing this info, as we want to provision
> them, and need to know what to provision them with / monitor for.
> i don't see the db scheme not changing between versions (not even
> sure we defined a concept of major or minor versions, we just said
> we'll have a version every 6 months).

I think therein lies the problem.
But at least changes should be incremental and not deprecate things every version...
So as I mentioned on the patch, you could have a table which has 'clusterId, serviceType' and, according to the serviceType(s) that a cluster is associated with, dynamically provision whatever is required (see the sketch below).
By the way, what services would be enabled on the default cluster?
What would ovirt-node register as?
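
A sketch of the separate-table approach mentioned above, assuming a cluster_services table keyed by cluster id plus a service type string; the table name, column names and the FK target are assumptions, not the actual engine schema:

-- Sketch only: one row per (cluster, service). Adding a new service
-- type means adding rows, not altering vds_groups again.
CREATE TABLE cluster_services (
    cluster_id   UUID NOT NULL REFERENCES vds_groups (vds_group_id),
    service_type VARCHAR(32) NOT NULL,
    PRIMARY KEY (cluster_id, service_type)
);

-- Example: mark one cluster (id is made up) as providing the gluster service.
INSERT INTO cluster_services (cluster_id, service_type)
VALUES ('00000000-0000-0000-0000-000000000001', 'gluster');
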
> as for backports of bug fixes to released versions, that's the only
> thing we managed to avoid in the past (i.e., so far, we didn't change
> the db scheme when backporting patches, since dealing with upgrade is
> a nightmare in that case - but that doesn't mean distributions of
> ovirt will not stabilize on a version which is a mix of an upstream
> version and a few more patches/features, which may include a db
> change, and will have to maintain the upgrade from that).
>