Hi Nathan,

We have been running GlusterFS 3.4 with RHEV in production for about six months now. We waited a long time for libgfapi support to show up, but finally we had to give up waiting and start using FUSE mounts. Everything has been good so far and we haven't seen any big issues with GlusterFS. We even had hardware problems on one of the storage nodes, but RHEV and GlusterFS survived that without problems.
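For anyone curious, the FUSE mount itself is trivial, something like this (server and volume names here are just examples, not our real ones):

    mount -t glusterfs storage1:/vmstore /mnt/vmstore

    # or persistently, in /etc/fstab:
    storage1:/vmstore  /mnt/vmstore  glusterfs  defaults,_netdev  0 0

The _netdev option just makes the mount wait for the network at boot.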
Our setup is rather small: only 4 compute nodes and 2 storage nodes. Of course we expect it to grow, but the biggest issue for us is that we are a bit uncertain how many VMs we can run on it without hurting performance too much.
We are also looking at the possibility of creating 3-6 node clusters where each node acts as both a compute and a storage node. Hopefully we will have a test setup running in about a week; a rough sketch of what we mean is below.
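To give an idea (host names and brick paths here are made up), three such nodes would carry a replicated volume like:

    gluster volume create vmstore replica 3 \
        node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
    gluster volume start vmstore

with the same three nodes also running as hypervisors.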
What kind of issues have you had with GlusterFS?
-samuli

Nathan Stratton <nathan(a)robotics.net> wrote on 10.6.2014 at 2.02:
Thanks, I will take a look at it. Anyone else currently using Gluster for backend images in production?
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com
On Mon, Jun 9, 2014 at 2:55 PM, Itamar Heim <iheim(a)redhat.com> wrote:
On 06/09/2014 01:28 PM, Nathan Stratton wrote:
So I understand that the news is still fresh and there may not be much going on yet in making Ceph work with oVirt, but I thought I would reach out and see if it was possible to hack them together and still use librbd rather than NFS.
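(By librbd I mean QEMU talking to the RBD image directly, with no filesystem in between; the pool and image names below are only placeholders:

    qemu-img info rbd:rbd/vm-disk-1:conf=/etc/ceph/ceph.conf

i.e. the image is addressed as a network block device, not a file.)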
I know, why not just use Gluster... the problem is I have tried to use Gluster for VM storage for years and I still don't think it is ready. Ceph still has work to do in other areas, but this is one area where I think it shines. This is a new lab cluster and I would like to try to use Ceph over Gluster if possible.
Unless I am missing something, can anyone tell me they are happy with Gluster as a backend image store? This will be a small 16-node, 10-gig cluster of shared compute / storage (yes, I know people want to keep them separate).
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 | www.broadsoft.com
there was a thread about this recently. afaict, ceph support will require adding a specific ceph storage domain to engine and vdsm, which is a full-blown feature (I assume you could try and hack it somewhat with a custom hook). waiting for the next version planning cycle to see if/how it gets pushed.
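(for reference, custom hooks are just executables dropped under the vdsm hook directories on each host, e.g.

    /usr/libexec/vdsm/hooks/before_vm_start/

a script placed there can rewrite the vm's libvirt xml before the vm starts - that is only a sketch of where such a hack would live, not a ready-made solution)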
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users