[ovirt-users] Hacking in Ceph rather than Gluster.
Samuli Heinonen
samppah at neutraali.net
Tue Jun 10 18:45:32 UTC 2014
Hi Nathan,
We have been running GlusterFS 3.4 with RHEV in production for about six months now. We were waiting for libgfapi support to show up for a long time, but finally we had to give up waiting and start using FUSE mounts. Everything has been good so far and we haven't seen any big issues with GlusterFS. We even had hardware problems on one of the storage nodes, but RHEV and GlusterFS survived that without problems.
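For anyone curious what the FUSE-mount approach looks like in practice, a setup along these lines is typical (hostnames and the volume name here are made up, not from our environment):

```shell
# Mount a GlusterFS volume over FUSE (hypothetical host/volume names).
# backup-volfile-servers lets the client fetch the volume file from a
# second storage node if the first one is down -- relevant to the
# hardware-failure scenario described above.
mount -t glusterfs \
    -o backup-volfile-servers=storage2.example.com \
    storage1.example.com:/vmstore /mnt/vmstore

# Equivalent /etc/fstab entry:
# storage1.example.com:/vmstore /mnt/vmstore glusterfs defaults,_netdev,backup-volfile-servers=storage2.example.com 0 0
```

Note this only covers the client-side mount; once mounted, actual replication and failover of the data itself is handled by the volume's replica configuration on the storage nodes.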
Our setup is rather small: only 4 compute nodes and 2 storage nodes. Of course we expect it to grow, but the biggest issue for us is that we are a bit uncertain how many VMs we can run on it without affecting performance too much.
We are also looking at the possibility of creating 3-6 node clusters where each node acts as both a compute and a storage node. Hopefully we will have a test setup running in about a week.
What kind of issues have you had with GlusterFS?
-samuli
Nathan Stratton <nathan at robotics.net> wrote on 10 Jun 2014 at 2:02:
> Thanks, I will take a look at it, anyone else currently using Gluster for backend images in production?
>
>
> ><>
> nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com
>
>
> On Mon, Jun 9, 2014 at 2:55 PM, Itamar Heim <iheim at redhat.com> wrote:
> On 06/09/2014 01:28 PM, Nathan Stratton wrote:
> So I understand that the news is still fresh and there may not be much
> going on yet in making Ceph work with oVirt, but I thought I would reach
> out and see if it was possible to hack them together and still use
> librbd rather than NFS.
>
> I know, why not just use Gluster... the problem is I have tried to use
> Gluster for VM storage for years and I still don't think it is ready.
> Ceph still has work to do in other areas, but this is one area where I
> think it shines. This is a new lab cluster and I would like to try to
> use Ceph over Gluster if possible.
>
> Unless I am missing something, can anyone tell me they are happy with
> Gluster as a backend image store? This will be a small 16-node 10-gig
> cluster of shared compute/storage (yes, I know people want to keep them
> separate).
>
> ><>
> nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
> www.broadsoft.com
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> there was a thread about this recently. AFAICT, Ceph support will require adding a specific Ceph storage domain to the engine and VDSM, which is a full-blown feature (I assume you could try to hack it somewhat with a custom hook). Waiting for the next version planning cycle to see if/how it gets pushed.
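For anyone tempted by the custom-hook route before a real storage domain exists: libvirt itself can already attach an RBD image over the network, so a VDSM hook could in principle rewrite a VM's disk element to something like the fragment below. This is only a sketch of the libvirt disk XML involved; the pool name, image name, monitor hosts, and auth user are all hypothetical, and the secret UUID would have to be provisioned on each host beforehand.

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- 'vmpool/vm-disk-1' is a made-up pool/image pair -->
  <source protocol='rbd' name='vmpool/vm-disk-1'>
    <host name='mon1.example.com' port='6789'/>
    <host name='mon2.example.com' port='6789'/>
  </source>
  <!-- cephx auth: the secret must already be defined in libvirt -->
  <auth username='libvirt'>
    <secret type='ceph' uuid='SECRET-UUID-GOES-HERE'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```

Since the engine would know nothing about such a disk, snapshots, live migration of storage, and monitoring would all be on you; that is why a proper Ceph storage domain in engine and VDSM is the real feature.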
>