<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">2017-01-25 21:01 GMT+02:00 Logan Kuhn <span dir="ltr"><<a href="mailto:support@jac-properties.com" target="_blank">support@jac-properties.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>We prefer Ceph too and we've got our ovirt instance configured in two different ways.</div><div><br></div><div>1. Openstack Cinder, each VM's disk will have a single volume in ceph with all volumes being under the same pool. </div></div></blockquote><div><span id="gmail-result_box" class="gmail-short_text" lang="en"><span class="gmail-">I am familiar with OpenStack, but do not want to deploy parts of it. That's why I want just to map rbd and use it like VMware uses mapped datastore: create a file system on it and create a file like virtual block device per VM, or even without file system at all just by using LVM.<br><br></span></span></div><div><span id="gmail-result_box" class="gmail-short_text" lang="en"><span class="gmail-">This scenario is not far from iSCSI: we have mapped one block device (with LVM on top) across all computes, oVirt agent manage volumes on that block device, and agent manage also mapping themselves. My idea is to do mapping block device by hand and all other process grant to oVirt.<br></span></span></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>2. Export an RBD via NFS from a gateway machine, this can be a trivially small physical or virtual machine that just exports the NFS share that is pointed at whatever RBD you choose to use.</div></div></blockquote><div>I can see two cons:<br></div><div>1. Single point of failure.<br></div><div>2. Potential growth of latency.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>Not a direct answer to your question, but hopefully it helps.</div><div><br></div>Regards,<div>Logan<br><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="gmail-h5">On Wed, Jan 25, 2017 at 12:55 PM, Yura Poltoratskiy <span dir="ltr"><<a href="mailto:yurapoltora@gmail.com" target="_blank">yurapoltora@gmail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="gmail-h5"><div dir="ltr"><div><div><div>Hi,<br><br></div>I want to use Ceph with oVirt in some non standard way. The main idea is to map rbd volume to all computes and to get the same block device, say /dev/foo/bar, across all nodes, and then use "POSIX compliant file systems" option to add Storage Domain. <br><br></div>Am I crazy? If not, what should I do next: create a file system on top of /dev/foo/bar, say XFS, and add DATA Domain as POSIX compliant? Does it work, I mean does oVirt compatible with not clustered file system in this scenario?<br><br></div>Mostly, I want to use rbd like oVirt do with iSCSI storage just to have scalability and high availability (for example, when one storage node failed).<br><div><br>Thanks for advice. <br><br></div><div>PS. Yes, I know about Gluster but want to use Ceph :)<br></div></div>