[Users] Ceph / rbd and ovirt
Josh Logan
joshtlogan at gmail.com
Sun Sep 23 15:59:16 UTC 2012
On Sun, Sep 23, 2012 at 8:41 AM, Itamar Heim <iheim at redhat.com> wrote:
> On 09/23/2012 05:33 PM, Josh Logan wrote:
>
>>
>> On Sun, Sep 23, 2012 at 6:10 AM, Itamar Heim <iheim at redhat.com
>> <mailto:iheim at redhat.com>> wrote:
>>
>> On 09/22/2012 08:58 AM, Josh Logan wrote:
>>
>>
>> I'm currently setting up an ovirt cluster and so far it looks
>> good. I like the integration with Foreman http://theforeman.org/ .
>>
>> I would like to use Ceph / rbd for my storage. I saw some mention
>> of patches coming in May, but I did not find any new posts.
>>
>> What is the status of this work? Are there any patches I can try
>> out? I have a working Ceph cluster and a working ovirt cluster, I
>> just need a way to bring them together.
>>
>> Thanks, JOSH
>>
>>
>> I don't remember any active work on this right now (for sure nothing
>> like the gluster integration being done).
>> But IIUC, ceph provides posixfs support - did you try creating a
>> posixfs based storage domain?
>> (You would need a "full" host (not ovirt-node) to install the ceph
>> client components on.)
>>
>> Thanks,
>> Itamar
>>
>>
>>
>> I am doing my work on Fedora 17 hosts, not ovirt-node, since I know this
>> will need more OS support.
>>
>> There are a few different Ceph storage interfaces, but the posix-based
>> one is the least ready for production. The rbd block device, which is
>> integrated into qemu and libvirt, is the one most suited for VM images.
>>
>> Are the Gluster patches available? I would like to see what that
>> feature looks like and if I can modify them for Ceph.
>> If there is a better filesystem to investigate please let me know.
>>
>> Thanks, JOSH
>>
>>
> gluster as a native storage domain (rather than posixfs) is still in
> review (and has patches only for the vdsm side):
> http://gerrit.ovirt.org/#/c/6856/
>
> you can also use NFS in the meantime if relevant for ceph.
>
>
Thanks for the pointer. I'll follow that and see what I learn.
The vdsm side may be similar since both are network disk devices. There are
only two steps needed to start up a VM with rbd. First, create the image:

   qemu-img create -f rbd rbd:data/host1 10G

Then, to start the image, add to the qemu command line:

   -drive file=rbd:data/host1,if=none,id=drive-virtio-disk0,format=raw
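
Since the drive is declared with if=none, qemu also needs a matching front
end; a minimal pairing (the -device id here is picked just for illustration)
would be something like:

   -drive file=rbd:data/host1,if=none,id=drive-virtio-disk0,format=raw \
   -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0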
Or within libvirt:

   <disk type='network' device='disk'>
     <driver name='qemu' type='raw'/>
     <source protocol='rbd' name='data/host1'/>
     <target dev='vda' bus='virtio'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
   </disk>
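
If cephx authentication is enabled on the cluster, the same disk definition
would also need the monitor host(s) and an auth reference; a rough sketch,
where the monitor name and secret UUID are only placeholders:

   <disk type='network' device='disk'>
     <driver name='qemu' type='raw'/>
     <!-- placeholder: references a libvirt secret holding the cephx key -->
     <auth username='admin'>
       <secret type='ceph' uuid='SECRET-UUID'/>
     </auth>
     <source protocol='rbd' name='data/host1'>
       <!-- placeholder monitor address -->
       <host name='mon1.example.com' port='6789'/>
     </source>
     <target dev='vda' bus='virtio'/>
   </disk>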
So the steps are simple, and maybe Gluster is more complex than I should
use as an example.
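
For anyone trying this: the image can be sanity-checked from the host before
booting a VM on it, assuming the 'data' pool used above:

   rbd ls -p data
   qemu-img info rbd:data/host1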
Thanks, JOSH