On 05/18/2012 04:28 PM, Deepak C Shetty wrote:
On 05/17/2012 11:05 PM, Itamar Heim wrote:
> On 05/17/2012 06:55 PM, Bharata B Rao wrote:
>>> On Wed, May 16, 2012 at 3:29 PM, Itamar Heim <iheim(a)redhat.com> wrote:
>>> On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
>>>>
>>>> Yair
>>>>
>>>> Thanks for the update. Can I have KVM hypervisors also function as
>>>> storage nodes for glusterfs? What is the release date for glusterfs
>>>> support? We're looking for a production deployment in June. Thanks
>>>
>>>
>>> Current status is:
>>> 1. Patches for provisioning gluster clusters and volumes via oVirt are
>>> in review, trying to cover this feature set [1].
>>> I'm not sure if all of them will make the oVirt 3.1 version, which is
>>> slated to branch for stabilization on June 1st, but I think "enough" is
>>> there.
>>> So I'd start trying the current upstream version to help find issues
>>> blocking you, and follow up on them during June as we stabilize oVirt
>>> 3.1 for release (planned for end of June).
>>>
>>> 2. You should be able to use the same hosts for both gluster and virt,
>>> but there is no special logic/handling for this yet (i.e., trying it
>>> and providing feedback would help improve this mode).
>>> I would suggest starting with separate clusters first, and only later
>>> trying the joint mode.
>>>
>>> 3. Creating a storage domain on top of gluster:
>>> - expose NFS on top of it, and consume it as a normal NFS storage domain
>>> - use a PosixFS storage domain with gluster mount semantics
>>> - future: probably a native gluster storage domain, up to native
>>> integration with qemu
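As a concrete illustration of the first two options above, the host ends up
consuming the volume through one of these two mounts. This is only a sketch
of the mount semantics, not oVirt code; the server name, volume name and
mount points are made up:

import subprocess

# Option 1: consume gluster's NFS export like any other NFS storage domain.
# Gluster's built-in NFS server speaks NFSv3, hence vers=3.
subprocess.check_call(["mount", "-t", "nfs", "-o", "vers=3",
                       "gserver:/vol1", "/mnt/vol1-nfs"])

# Option 2: the native glusterfs (FUSE) mount; a PosixFS storage domain
# would sit on a mount with these semantics.
subprocess.check_call(["mount", "-t", "glusterfs",
                       "gserver:/vol1", "/mnt/vol1-fuse"])
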
>>
>> I am looking at GlusterFS integration with QEMU, which involves adding
>> GlusterFS as a block backend in QEMU. This will involve QEMU talking to
>> gluster directly via libglusterfs, bypassing FUSE. I could specify a
>> volume file and the VM image directly on the QEMU command line to boot
>> from the VM image that resides on a gluster volume.
>>
>> Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster
>>
>> In this example, Fedora.img is being served by gluster, and client.vol
>> would have the client-side translators specified.
>>
>> I am not sure if this use case would be served if GlusterFS is
>> integrated as a PosixFS storage domain in VDSM. PosixFS would involve a
>> normal FUSE mount, and QEMU would be required to work with images from
>> the FUSE mount path?
>>
>> With QEMU supporting the GlusterFS backend natively, further
>> optimizations are possible when the gluster volume is local to the host
>> node. In this case, one could provide QEMU with a simple volume file
>> that would not contain client or server xlators, but instead just the
>> posix xlator. This would lead to the most optimal IO path, bypassing
>> RPC calls.
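To make the "posix xlator only" idea concrete, here is a rough sketch of
what such a minimal volume file could contain, written out from Python. The
brick directory and file names are made up, and a real deployment's volfile
may well differ:

# Sketch only: a volume file for a purely local gluster volume, containing
# just the storage/posix translator (no client/server xlators, so no RPC).
MINIMAL_VOLFILE = """\
volume posix
  type storage/posix
  option directory /data/brick1
end-volume
"""

with open("local-posix.vol", "w") as volfile:
    volfile.write(MINIMAL_VOLFILE)

# qemu could then be pointed at it along the lines of the earlier example:
#   qemu -drive file=local-posix.vol:/Fedora.img,format=gluster
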
>>
>> So do you think this use case (QEMU supporting the GlusterFS backend
>> natively and using a volume file to specify the needed translators)
>> warrants a specialized storage domain type for GlusterFS in VDSM?
>
> I'm not sure if this calls for a special storage domain, or a PosixFS
> based domain with enhanced capabilities.
> Ayal?

Related question:
With QEMU using the GlusterFS backend natively (as described above), it
also needs additional options/parameters on the qemu command line (as given
above). How does VDSM today support generating a custom qemu command line?
I know VDSM talks to libvirt, so is there a framework in VDSM to edit/modify
the domxml based on some pre-conditions, and how/where should one hook in to
do that modification? I know of the libvirt hooks framework in VDSM, but
that was more for temporary/experimental needs, or am I completely wrong
here?

For something vdsm is not aware of yet, you can use vdsm custom hooks
to manipulate the libvirt XML.

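For example, a minimal before_vm_start hook could look roughly like the
sketch below. It assumes vdsm's hooking helper module (read_domxml /
write_domxml) and invents a custom property, gluster_volfile, that names the
volume file to use; the rewritten form of the disk source is purely
illustrative, since the exact spec qemu/libvirt will expect for a native
gluster drive is not settled in this thread:

#!/usr/bin/python
# Sketch of a vdsm custom hook, e.g. dropped into
# /usr/libexec/vdsm/hooks/before_vm_start/50_gluster_drive
import os
import hooking

domxml = hooking.read_domxml()  # libvirt domain XML as a minidom document

# Hypothetical custom property, passed to the hook as an environment
# variable, naming the gluster volume file to use.
volfile = os.environ.get('gluster_volfile')

if volfile is not None:
    for disk in domxml.getElementsByTagName('disk'):
        for source in disk.getElementsByTagName('source'):
            image = source.getAttribute('file')
            if not image:
                continue
            # Illustrative rewrite into the "volfile:/image" form from the
            # qemu example quoted above.
            source.setAttribute(
                'file', '%s:/%s' % (volfile, os.path.basename(image)))

hooking.write_domxml(domxml)
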
Irrespective of whether GlusterFS integrates into VDSM as PosixFS or as a
special storage domain, it won't address the need to generate a custom qemu
command line when a file/image is served by GlusterFS. What's the way to
address this issue in VDSM?

When vdsm supports this, I expect it will know to pass these.
It won't necessarily be a generic PosixFS at that time.

I am assuming here that a special storage domain (aka repo engine) only
manages the image repository and image-related operations, and won't help
in modifying the qemu command line being generated.

Support by vdsm for specific qemu options (via libvirt) will be done either
by having a special type of storage domain, or by some capability exchange,
etc.

[Ccing vdsm-devel also]
thanx,
deepak