For a somewhat dissenting opinion: I have a (currently) 2-node gluster/compute system with a self-hosted engine running, plus a 2nd cluster of 3 hosts sharing the gluster servers from the first, both just fine in production, and a 3-node dev system. On top of ZFS, even :)
That said, I've broken it several times and had interesting experiences fixing it. There are also 2 bugs out there that I'd consider blockers for production use of gluster server/hosts with oVirt. In particular, Bug 1172905 <https://bugzilla.redhat.com/show_bug.cgi?id=1172905> (gluster VMs pause when vdsmd is restarted) in combination with Bug 1158108 <https://bugzilla.redhat.com/show_bug.cgi?id=1158108> (vdsmd leaks small amounts of memory) means you have to be careful not to run out of RAM on a gluster server/host node, or you'll pause some VMs and have to restart them to recover. I was already running this when these surfaced in 3.5, so I'm limping along, but I wouldn't set up this config or use gluster in a new deployment right now. Side note: as far as I can tell this doesn't freeze VMs mounted via NFS from a gluster server, so you can do that (and I'm working on migrating to it). I also currently have fencing disabled because it can be ugly on a gluster system. The new arguments to prevent fencing when part of the cluster is down should work around this; I'm just waiting until my 3rd gluster node is online before going back to that.
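To make the RAM problem less of a surprise, something like this rough sketch (not from the thread; the 2048 MB threshold and the warning wording are my own assumptions) could be cron'd on each gluster server/host node so you can schedule a maintenance-window vdsmd restart before VMs start pausing:

```shell
#!/bin/sh
# Hypothetical low-memory check for a gluster server/host node.
# THRESHOLD_MB is an arbitrary assumption -- tune it for your hardware.
THRESHOLD_MB=2048

# MemAvailable is reported by reasonably recent Linux kernels.
avail_mb=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)

if [ "$avail_mb" -lt "$THRESHOLD_MB" ]; then
    # Restarting vdsmd pauses gluster-backed VMs (bug 1172905),
    # so do it in a maintenance window, not from this script.
    echo "WARNING: only ${avail_mb} MB available on $(hostname); plan a vdsmd restart"
fi
```

Given the vdsmd leak in 1158108, you end up doing periodic restarts anyway; the point of the check is just to pick the window instead of having it picked for you.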
oVirt won't help you understand the best way to deploy your gluster bricks, either. For instance, your hosted engine should be on a brick by itself, with n-way replication across all hosted-engine servers. Your VMs should probably be on a distributed-replicated volume, or the newly available dispersed-mode volume, to get the benefit of multiple servers without having to write to every brick all the time (unless your use case demands 4 copies of your data for redundancy). Allocate extra RAM for caching, too; it helps a lot. Proper setup of the server name, and use of localhost mounts, ctdb, or keepalived (and an understanding of why you want them), is important too.
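As a hedged illustration of that layout (host names, brick paths, and replica/disperse counts below are all made up; check them against your own node count before running anything):

```
# Hosted engine: small volume on dedicated bricks, replicated across
# every host that can run the engine
gluster volume create engine replica 3 \
    host1:/bricks/engine host2:/bricks/engine host3:/bricks/engine

# VM storage, option A: distributed-replicated, so each write hits
# only one replica pair rather than every brick
gluster volume create vmstore replica 2 \
    host1:/bricks/vm host2:/bricks/vm \
    host3:/bricks/vm host4:/bricks/vm

# VM storage, option B: the newer dispersed (erasure-coded) mode
gluster volume create vmstore-ec disperse 3 redundancy 1 \
    host1:/bricks/ec host2:/bricks/ec host3:/bricks/ec
```

These commands have to run against a formed gluster cluster, so treat them as a shape to aim for, not a copy-paste recipe.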
Bottom line: Gluster is complex, no matter how easy the oVirt interface makes it look. If you aren't prepared to get down and dirty with your network file system, I wouldn't recommend this. If you are, it's good stuff, and I'm really looking forward to libgfapi integration in oVirt beyond the dev builds.
-Darrell
On Feb 18, 2015, at 3:06 PM, Donny D <donny(a)cloudspin.me> wrote:
What he said

Happy Connecting. Sent from my Sprint Samsung Galaxy S® 5
-------- Original message --------
From: Scott Worthington <scott.c.worthington(a)gmail.com>
Date: 02/18/2015 2:03 PM (GMT-07:00)
To: Donny D <donny(a)cloudspin.me>
Subject: Re: [ovirt-users] ovirt and glusterfs setup
> I did not have a good experience putting both gluster and virt on the same
> node. I was doing hosted engine with replicate across two nodes, and one day
> it went into split brain hell... i was never able to track down why. However
> i do have a gluster with distribute and replica setup on its own with a
> couple nodes, and it has given me zero problems in the last 60 days. It
> seems to me that gluster and virt need to stay separate for now. Both are
> great products and both work as described, just not on the same node at the
> same time.
>
The issue, as I perceive it, is that newbies find Jason Brooks' blog:

http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/

And then newbies think this Red Hat blog is production quality. In my opinion, the blog how-to is okay (not really, IMHO) for a lab, but not for production.
Since fencing is important in oVirt, having gluster on the hosts is a no-no, since a non-responsive host could be fenced at any time -- and the engine could fence multiple hosts (and bork a locally hosted gluster file system and then screw up the entire gluster cluster).
--ScottW
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users