2017-01-25 21:01 GMT+02:00 Logan Kuhn <support(a)jac-properties.com>:
We prefer Ceph too, and we've got our oVirt instance configured in two
different ways.
1. OpenStack Cinder: each VM's disk gets a single volume in Ceph, with all
volumes living under the same pool.
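For context, a hedged illustration of what that layout looks like from the
Ceph side (the pool name "volumes" and the image names are made up; Cinder
typically names each image volume-<uuid>):

    # One RBD image per VM disk, all in the same pool
    rbd ls -p volumes
    volume-3f2a9c0e-...   <- disk of one VM
    volume-9c1b44d7-...   <- disk of another VM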
I am familiar with OpenStack, but I do not want to deploy parts of it. That's
why I just want to map an RBD and use it the way VMware uses a mapped
datastore: create a file system on it and create one file per VM that acts as
a virtual block device, or even skip the file system altogether and just use
LVM.
This scenario is not far from iSCSI: one block device (with LVM on top) is
mapped across all compute nodes, the oVirt agent manages the volumes on that
block device, and it also manages the mappings itself. My idea is to do the
block device mapping by hand and leave everything else to oVirt, as sketched
below.
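To make that concrete, here is a minimal sketch of the manual-mapping idea;
the pool and image names (ovirt-data/shared-disk), the size, and the mount
point are placeholders, not a tested recipe:

    # Create the shared image once, from any node with a Ceph client
    rbd create ovirt-data/shared-disk --size 10240   # size in MB, illustrative

    # Map it on every compute node; each host sees the same block device
    rbd map ovirt-data/shared-disk
    # -> /dev/rbd/ovirt-data/shared-disk

    # Variant A: file system on top, added as a "POSIX compliant FS" domain
    mkfs.xfs /dev/rbd/ovirt-data/shared-disk         # run once, on one node
    mkdir -p /mnt/ceph-domain
    mount /dev/rbd/ovirt-data/shared-disk /mnt/ceph-domain

    # Variant B: no file system, let LVM carve out per-VM logical volumes
    pvcreate /dev/rbd/ovirt-data/shared-disk
    vgcreate ovirt_vg /dev/rbd/ovirt-data/shared-disk

Whether a non-clustered file system mounted on several hosts at once will
keep oVirt happy is the open question raised in the original mail below.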
2. Export an RBD via NFS from a gateway machine; this can be a trivially
small physical or virtual machine that just exports an NFS share pointed at
whatever RBD you choose to use.
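A hedged sketch of what such a gateway would run (the image name, mount
point, and export subnet are illustrative placeholders, not details from
Logan's setup):

    # On the gateway machine only
    rbd map ovirt-data/nfs-disk
    mkfs.xfs /dev/rbd/ovirt-data/nfs-disk
    mkdir -p /exports/ovirt
    mount /dev/rbd/ovirt-data/nfs-disk /exports/ovirt

    # Export the mount to the oVirt hosts and reload the export table
    echo '/exports/ovirt 192.168.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra

    # Then attach /exports/ovirt as an ordinary NFS storage domain in oVirt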
I can see two cons:
1. Single point of failure.
2. Potentially higher latency.
Not a direct answer to your question, but hopefully it helps.
Regards,
Logan
On Wed, Jan 25, 2017 at 12:55 PM, Yura Poltoratskiy <yurapoltora(a)gmail.com> wrote:
> Hi,
>
> I want to use Ceph with oVirt in a somewhat non-standard way. The main idea
> is to map an RBD volume to all compute nodes so that the same block device,
> say /dev/foo/bar, appears on every node, and then use the "POSIX compliant
> file systems" option to add a Storage Domain.
>
> Am I crazy? If not, what should I do next: create a file system on top of
> /dev/foo/bar, say XFS, and add a DATA Domain as POSIX compliant? Does that
> work? I mean, is oVirt compatible with a non-clustered file system in this
> scenario?
>
> Mostly, I want to use RBD the way oVirt does with iSCSI storage, just to
> have scalability and high availability (for example, when one storage node
> fails).
>
> Thanks for any advice.
>
> PS. Yes, I know about Gluster, but I want to use Ceph :)
>