On Thu, Jan 26, 2017 at 11:39 AM, Yura Poltoratskiy
<yurapoltora(a)gmail.com> wrote:
On 26.01.2017 11:11, Nir Soffer wrote:
>
> On Wed, Jan 25, 2017 at 8:55 PM, Yura Poltoratskiy
> <yurapoltora(a)gmail.com> wrote:
>>
>> Hi,
>>
>> I want to use Ceph with oVirt in a somewhat non-standard way. The main
>> idea is to map an RBD volume to all compute nodes so that the same block
>> device, say /dev/foo/bar, is visible across all nodes, and then use the
>> "POSIX compliant file systems" option to add a Storage Domain.
>>
>> Am I crazy?
>
> Yes
Thnx :)
>
>> If not, what should I do next: create a file system on top of
>> /dev/foo/bar, say XFS, and add a DATA Domain as POSIX compliant? Would
>> that work, i.e. is oVirt compatible with a non-clustered file system in
>> this scenario?
>
> This can work only with a clustered file system, not with XFS. Double
> mounting would quickly corrupt the file system.
Can you tell me which FS I should choose to do some experiments?
GFS2 is one example.
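For a quick experiment, the flow would be something like this (an untested
Python/subprocess sketch; the cluster name "ovirt", image "foo/bar", journal
count and mount point are all placeholders, and it assumes corosync/dlm are
already running on the hosts):

    import subprocess

    def run(*cmd):
        # Print and execute one command, failing loudly on error.
        print(' '.join(cmd))
        subprocess.check_call(cmd)

    # On every host: map the shared image to a local block device.
    run('rbd', 'map', 'foo/bar')  # appears as /dev/rbd0 (and /dev/rbd/foo/bar)

    # On ONE host only: create the clustered file system. lock_dlm is what
    # makes concurrent mounts safe; create one journal per host (-j 3 here).
    run('mkfs.gfs2', '-p', 'lock_dlm', '-t', 'ovirt:data', '-j', '3',
        '/dev/rbd/foo/bar')

    # On every host: mount it and point a POSIX compliant FS domain at it.
    run('mount', '-t', 'gfs2', '/dev/rbd/foo/bar', '/mnt/gfs2-data')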
And in general: what are the use cases for the "POSIX compliant FS" option?
The main use case is to let users consume a clustered file system they
already have in their organization.
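If you want to script it, adding such a domain through the Python SDK
(ovirtsdk4) looks roughly like this - an untested sketch; the engine URL,
credentials, host name and storage details are placeholders, and for a
GFS2-on-RBD setup the path would be the local block device:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholders: engine URL, credentials, host and storage details.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )

    sds_service = connection.system_service().storage_domains_service()
    sds_service.add(
        types.StorageDomain(
            name='posix-data',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='host1'),
            storage=types.HostStorage(
                type=types.StorageType.POSIXFS,
                path='/dev/rbd/foo/bar',   # what mount(8) gets as the device
                vfs_type='gfs2',
            ),
        ),
    )
    connection.close()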
CephFS is also a viable option if you don't want to use Cinder; however,
performance and scalability will be lower than with RBD, and the gateway
publishing the CephFS mounts will be a bottleneck.
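With CephFS, only the storage part of the sketch above changes (again all
placeholders; the path is the usual CephFS mount source and the secret file
must already exist on the hosts):

    import ovirtsdk4.types as types

    # CephFS as the backend of a POSIX compliant FS domain (placeholders).
    cephfs_storage = types.HostStorage(
        type=types.StorageType.POSIXFS,
        path='mon1.example.com:6789:/',   # CephFS mount source
        vfs_type='ceph',
        mount_options='name=admin,secretfile=/etc/ceph/admin.secret',
    )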
I think RBD is the best option if you can manage the Cinder deployment
and the upgrades required to support it.
We have been talking for a long time about a native Ceph storage type,
managing Ceph directly without Cinder, but we have never found the time
to work on it.
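Just to illustrate what "managing Ceph directly" would mean, the python-rbd
bindings already expose the needed operations (untested sketch; the pool and
image names are made up):

    import rados
    import rbd

    # Connect using the local Ceph configuration.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('ovirt-data')   # pool for disk images
        try:
            # Create a 10 GiB thin-provisioned disk for a VM.
            rbd.RBD().create(ioctx, 'vm-disk-1', 10 * 1024**3)
            print(rbd.RBD().list(ioctx))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()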
>> Mostly, I want to use RBD the way oVirt does with iSCSI storage, just
>> to have scalability and high availability (for example, when one
>> storage node fails).
>
> You have two ways to use Ceph:
>
> - via Cinder - you get the best performance and scalability
> - via CephFS - you get all the features; it works like fault-tolerant NFS
>
> Nir
>
>> Thanks for the advice.
>>
>> PS. Yes, I know about Gluster, but I want to use Ceph :)
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>>
>> http://lists.ovirt.org/mailman/listinfo/users
>>