[ovirt-devel] physical disk management for gluster in vdsm
Saggi Mizrahi
smizrahi at redhat.com
Mon Sep 29 11:35:08 UTC 2014
----- Original Message -----
> From: "Itamar Heim" <iheim at redhat.com>
> To: "Saggi Mizrahi" <smizrahi at redhat.com>, "Balamurugan Arumugam" <barumuga at redhat.com>
> Cc: devel at ovirt.org
> Sent: Monday, September 29, 2014 10:34:29 AM
> Subject: Re: [ovirt-devel] physical disk management for gluster in vdsm
>
> On 09/28/2014 05:45 PM, Saggi Mizrahi wrote:
> >
> >
> > ----- Original Message -----
> >> From: "Balamurugan Arumugam" <barumuga at redhat.com>
> >> To: "Saggi Mizrahi" <smizrahi at redhat.com>
> >> Cc: devel at ovirt.org
> >> Sent: Thursday, September 25, 2014 3:41:05 AM
> >> Subject: Re: [ovirt-devel] physical disk management for gluster in vdsm
> >>
> >>
> >> ----- Original Message -----
> >>> From: "Saggi Mizrahi" <smizrahi at redhat.com>
> >>> To: "Balamurugan Arumugam" <barumuga at redhat.com>
> >>> Cc: devel at ovirt.org
> >>> Sent: Wednesday, September 24, 2014 1:34:46 PM
> >>> Subject: Re: [ovirt-devel] physical disk management for gluster in vdsm
> >>>
> >>>
> >>>
> >>> ----- Original Message -----
> >>>> From: "Balamurugan Arumugam" <barumuga at redhat.com>
> >>>> To: devel at ovirt.org
> >>>> Sent: Tuesday, September 23, 2014 2:46:59 PM
> >>>> Subject: [ovirt-devel] physical disk management for gluster in vdsm
> >>>>
> >>>>
> >>>> Hi All,
> >>>>
> >>>> Currently, gluster management in oVirt is incomplete if the disks in a
> >>>> host are not already formatted and mounted; it expects those steps to
> >>>> have been done before the host is added to oVirt. We have a requirement
> >>>> to manage physical disks as follows (a rough sketch of the host-side
> >>>> flow appears after the list):
> >>>>
> >>>> 1. identify and populate physical disks.
> >>>> 2. identify and manage hardware raids.
> >>>> 3. create thick and thin logical volumes on unused physical disks.
> >>>> 4. format and mount logical volumes.
> >>>> 5. fstab management for new logical volumes.
> >>>>
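> >>>> As a rough illustration (not the PoC code), the host-side flow for
> >>>> steps 1 and 3-5 could look something like the sketch below, using
> >>>> plain LVM and util-linux tools; every name, size and mount point in
> >>>> it is made up, and hardware RAID (step 2) is left out:
> >>>>
> >>>>   import subprocess
> >>>>
> >>>>   def list_unused_disks():
> >>>>       # Step 1: whole disks with no filesystem and no partitions.
> >>>>       # -d lists only top-level devices, -n drops the header line.
> >>>>       out = subprocess.check_output(
> >>>>           ["lsblk", "-dn", "-o", "NAME,TYPE,FSTYPE"])
> >>>>       disks = []
> >>>>       for line in out.splitlines():
> >>>>           fields = line.split()
> >>>>           # FSTYPE is empty (2 fields) when the disk is unformatted.
> >>>>           if len(fields) == 2 and fields[1] == "disk":
> >>>>               disks.append("/dev/" + fields[0])
> >>>>       return disks
> >>>>
> >>>>   def create_brick(disk, vg="gluster_vg", lv="brick1",
> >>>>                    mountpoint="/bricks/brick1"):
> >>>>       # Step 3: a thick LV taking the whole disk.
> >>>>       subprocess.check_call(["pvcreate", disk])
> >>>>       subprocess.check_call(["vgcreate", vg, disk])
> >>>>       subprocess.check_call(["lvcreate", "-l", "100%FREE",
> >>>>                              "-n", lv, vg])
> >>>>       dev = "/dev/%s/%s" % (vg, lv)
> >>>>       # Step 4: format (xfs, as usually recommended for gluster
> >>>>       # bricks) and mount.
> >>>>       subprocess.check_call(["mkfs.xfs", "-i", "size=512", dev])
> >>>>       subprocess.check_call(["mkdir", "-p", mountpoint])
> >>>>       subprocess.check_call(["mount", dev, mountpoint])
> >>>>       # Step 5: persist the mount across reboots.
> >>>>       with open("/etc/fstab", "a") as f:
> >>>>           f.write("%s %s xfs defaults 0 0\n" % (dev, mountpoint))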
> >>>> To have this feature, I would like to start a discussion here to explore
> >>>> possible options suitable for vdsm/engine.
> >>>>
> >>>> We have done a small PoC with OpenLMI[1] by adding verbs to vdsm to
> >>>> achieve this. We also explored ovirt-engine calling
> >>>> tog-pegasus/cim-server directly to get the CIM objects, avoiding two
> >>>> levels of hops ("ovirt-engine calls vdsm <-> vdsm calls openlmi
> >>>> locally <-> openlmi does the job" becomes just "ovirt-engine calls
> >>>> openlmi <-> openlmi does the job"); that also works.
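> >>>> For the direct path, the engine side is essentially a WBEM client
> >>>> call; a minimal sketch with pywbem (the host, credentials and class
> >>>> queried are illustrative, assuming openlmi-storage is installed):
> >>>>
> >>>>   import pywbem
> >>>>
> >>>>   # Connect to the tog-pegasus CIMOM on the host (default HTTPS port).
> >>>>   conn = pywbem.WBEMConnection("https://host.example.com:5989",
> >>>>                                ("pegasus", "secret"),
> >>>>                                default_namespace="root/cimv2")
> >>>>   # LMI_StorageExtent models block devices in openlmi-storage.
> >>>>   for ext in conn.EnumerateInstances("LMI_StorageExtent"):
> >>>>       print(ext["DeviceID"])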
> >>>>
> >>>> I would like to get your feedback on the PoC, and suggestions/ideas
> >>>> on how physical disk management could be done.
> >>> I would prefer not to depend on something like openlmi. It either
> >>> replicates or goes against the ovirt topology. There is no reason for
> >>> VDSM to call out to something that calls something else that goes back
> >>> to the host and runs fdisk.
> >>>
> >>
> >> Thanks for your input.
> >>
> >> Could you also comment on ovirt-engine directly calling openlmi to do the
> >> job?
> > That adds a dependency on openlmi, and I'd prefer not to depend on it.
> > It makes installation harder and has more points of failure. And we
> > already have vdsm.
>
> With openlmi becoming the "way to remotely manage" fedora/rhel, and
> hopefully adding a REST API, I don't see why we wouldn't allow using it
> directly from the engine for single-host admin operations, to avoid
> having to wrap such calls via vdsm.
> I would prefer doing this via its REST API if possible, though.
It will require the user to set up openlmi, which is another step.
openLMI uses its own broker, its own "locking" semantics, etc.
It has its own abstraction level and goals. The whole point
of VDSM is having a tailor-made solution for our use case.
That includes the scale/redundancy/concurrency considerations.
There is no way for us to ensure that openLMI has the same
goals and considerations as we do, and quite frankly we do
most of those things anyway when creating a domain.
oVirt needs to start having a cohesive architecture, not
a Frankenstein's-monster approach of connecting things together
and hoping everything works.
Every dependency you have is another point of failure.
It's another service you have to manage.
It's another project where you need to track bugs, work around
interfaces, define defaults, etc.
The project is already stretched thin trying to support
configuration using VDSM and openstack.
Someone needs to fix those bugs and make sure the fixes are
available on all supported distros when we release a version.
We need to make sure they don't make changes that are incompatible
with how we do things. That means being active with patches and on
the mailing list.
We already have problems with the gluster APIs.
Adding another project to the mix will take us over the edge.
That being said, you are the one responsible for those kinds
of things, so you know better.