On 23/07/2014 1:45 am, "Jason Brooks" <jbrooks@redhat.com> wrote:
----- Original Message -----
> From: "Jason Brooks" <jbrooks@redhat.com>
> To: "Andrew Lau" <andrew@andrewklau.com>
> Cc: "users" <users@ovirt.org>
> Sent: Tuesday, July 22, 2014 8:29:46 AM
> Subject: Re: [ovirt-users] Can we debug some truths/myths/facts about hosted-engine and gluster?
>
>
>
> ----- Original Message -----
> > From: "Andrew Lau" <andrew(a)andrewklau.com>
> > To: "users" <users(a)ovirt.org>
> > Sent: Friday, July 18, 2014 4:50:31 AM
> > Subject: [ovirt-users] Can we debug some truths/myths/facts about
> > hosted-engine and gluster?
> >
> > Hi all,
> >
> > As most of you have got hints from previous messages, hosted engine won't
> > work on gluster. A quote from BZ1097639:
> >
> > "Using hosted engine with Gluster backed storage is currently something we
> > really warn against.
>
> My current setup is hosted engine, configured w/ gluster storage as described
> in my blog post, but with three hosts and replica 3 volumes.
>
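For anyone trying to reproduce that, a replica 3 volume along those lines is created roughly as follows (the volume name, hostnames, and brick paths are placeholders, not the actual values from the blog post):

    # one brick per host, three-way synchronous replication
    gluster volume create engine replica 3 \
        host1:/bricks/engine host2:/bricks/engine host3:/bricks/engine
    gluster volume start engine
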
> Only issue I've seen is an errant message about the Hosted Engine being down
> following an engine migration. The engine does migrate successfully, though.
That was fixed in 3.4.3, I believe, although when it happened to me my
engine didn't migrate; it just sat there.
>
> RE your bug, what do you use for a mount point for the nfs storage?
In the log you attached to your bug, it looks like you're using localhost as
the nfs mount point. I use a dns name that resolves to the virtual IP hosted
by ctdb. So, you're only ever talking to one nfs server at a time, and
failover between the nfs hosts is handled by ctdb.
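
Concretely, that arrangement looks something like the following (the VIP, interface, and DNS name here are examples, not Jason's actual values):

    # /etc/ctdb/public_addresses on each NFS host -- ctdb keeps this
    # virtual IP assigned to exactly one healthy host at a time
    10.0.0.100/24 eth0

    # DNS maps a stable name to the VIP, e.g. nfs.example.com -> 10.0.0.100,
    # and hosted-engine setup is then pointed at nfs.example.com:/engine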
I also tried your setup, but hit other complications. I used localhost in an
old setup, as I was under the assumption that when accessing anything gluster
related, the connection point only provides the volume info and you then
connect to any server in the volume.
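
For what it's worth, that assumption does hold for the fuse client: the host named in the mount command is only contacted for the volume info, and the client then talks to all bricks directly. A mount sketch (hostnames are examples; the backup-volfile-servers option needs a reasonably recent glusterfs):

    # host1 only hands out the volume definition at mount time; writes
    # then go to every brick in the replica set directly
    mount -t glusterfs -o backup-volfile-servers=host2:host3 \
        host1:/engine /mnt/engine

With an NFS mount, by contrast, all traffic really does go through the single server named in the mount point, which is why the VIP matters there.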
Anyway, like I said, my main testing rig is now using this configuration;
help me try and break it. :)
rm -rf /
Jokes aside, are you able to reboot a server without losing the VM? My
experience with ctdb (based on your blog) was that even with the
"floating/virtual IP", it wasn't fast enough, or something in the gluster
layer delayed the failover. Either way, the VM goes into a paused state and
can't be resumed.
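
One way to put a number on that failover window during a reboot test is to poll the VIP from a client while the host goes down (using the example name from above):

    # prints a timestamp every second; the length of any run of
    # "unreachable" lines is roughly the failover window
    while true; do
        printf '%s ' "$(date +%T)"
        showmount -e nfs.example.com >/dev/null 2>&1 && echo up || echo unreachable
        sleep 1
    done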
>
> Jason
>
>
> >
> >
> > I think this bug should be closed or re-targeted at documentation,
> > because there is nothing we can do here. Hosted engine assumes that
> > all writes are atomic and (immediately) available for all hosts in the
> > cluster. Gluster violates those assumptions.
> >
> > "
> >
> > Until the documentation gets updated, I hope this serves as a useful
> > notice, at least to save people some of the headaches I hit, like
> > hosted-engine starting up multiple VMs because of the above issue.
> >
> >
> > Now my question: does this theory prevent a scenario of perhaps something
> > like a gluster replicated volume being mounted as a glusterfs filesystem
> > and then re-exported as a native kernel NFS share for the hosted-engine
> > to consume? It could then be possible to chuck ctdb in there to provide a
> > last-resort failover solution. I have tried it myself and suggested it to
> > two people who are running a similar setup. They are now using the native
> > kernel NFS server for hosted-engine and haven't reported as many issues.
> > Curious, could anyone validate my theory on this?
> >
> > Thanks,
> > Andrew
> >
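
For reference, the re-export arrangement described in that last question looks roughly like this (the volume name, mount point, and export options are examples; gluster's built-in NFS server has to be disabled first so the kernel server can bind port 2049):

    # disable gluster's own NFS server on the volume
    gluster volume set engine nfs.disable on

    # on each storage host, fuse-mount the replicated volume locally
    mount -t glusterfs localhost:/engine /mnt/engine-gluster

    # re-export it via the kernel NFS server; fsid= is required when
    # exporting a FUSE filesystem
    echo '/mnt/engine-gluster *(rw,no_root_squash,fsid=1)' >> /etc/exports
    exportfs -ra

    # ctdb then floats a VIP across the hosts, and hosted-engine mounts
    # the export through that VIP as ordinary NFS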