[ovirt-users] HE (3.6) on gluster storage, chicken and egg status
Sahina Bose
sabose at redhat.com
Wed Jan 6 02:49:35 EST 2016
On 01/05/2016 09:33 PM, Fil Di Noto wrote:
> On Thu, Dec 31, 2015 at 3:44 AM, Donny Davis <donny at cloudspin.me> wrote:
>> I would say you would be much better off picking two hosts to do your HE on
>> and setting up DRBD for the HE storage. You will have fewer problems with
>> your HE.
> Is DRBD an oVirt thing? Or are you suggesting a db backup or
> replicated sql db solution that I would cut over to manually?
>
>
>> On Thu, Dec 31, 2015 at 1:41 AM, Sahina Bose <sabose at redhat.com> wrote:
>>> If you mean, creating new gluster volumes - you need to make sure the
>>> gluster service is enabled on the Default cluster. The cluster that HE
>>> creates, has only virt service enabled by default. Engine should have been
>>> installed in "Both" mode like Roy mentioned.
> I went over those settings with limited success. I went through
> multiple iterations of trying to stand up HE on gluster storage. Then
> I gave up on that and tried NFS storage.
> Ultimately it always came down to sanlock errors (both gluster and NFS
> storage). I tried restarting the sanlock service, which would lead to
> the watchdog rebooting the hosts. When a host came back up, it almost
> seemed like things had started working: I could begin to
> see/create gluster volumes (see hosted_storage, or begin to create a
> data storage domain).
>
> But when I would try to activate the hosted_storage domain, things
> would start to fall apart again. sanlock, as far as I can tell.
>
> I am currently running the engine on a physical system and things are
> working fine. I am considering taking a backup and attempting to use
> the HE physical to VM migration method, time permitting.
Were the optimizations applied on the gluster volume hosting the HE?
Assuming that "engine" is the gluster volume:
# gluster volume set engine group virt
# gluster volume set engine storage.owner-uid 36
# gluster volume set engine storage.owner-gid 36
Did you encounter errors when adding the data storage domain or when
installing additional HE hosts?
If you are having trouble creating a gluster volume using the oVirt
interface:
1. Enable the gluster service on the Default cluster.
2. Put the host into maintenance and activate it, so that oVirt
recognizes it as a gluster-enabled host.
3. You should now see the gluster volume hosting the HE added to
the oVirt UI.
If this does not work, you could try creating the gluster volume (for
the data store) from the CLI and then adding the storage domain from
the oVirt UI.
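As a rough sketch of that CLI route (the host names and brick paths
below are placeholders, not taken from this thread):

```shell
# Create a replica-3 data volume; host1..host3 and the brick paths
# are hypothetical -- substitute your own.
gluster volume create data replica 3 \
    host1:/gluster/bricks/data \
    host2:/gluster/bricks/data \
    host3:/gluster/bricks/data

# Apply the same optimizations as recommended for the HE volume.
gluster volume set data group virt
gluster volume set data storage.owner-uid 36
gluster volume set data storage.owner-gid 36

gluster volume start data
```

The storage domain can then be added from the oVirt UI as a GlusterFS
domain, pointing at e.g. host1:/data.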
>
>
>>> On 12/28/2015 12:43 AM, Roy Golan wrote:
>>>
>>>
>>> 3 way replica is the officially supported replica count for VM store use
>>> case. If you wish to work with replica 4, you can update the
>>> supported_replica_count in vdsm.conf
> Thanks for that insight. I think I just experienced the bad aspects of
> both quorum=auto and quorum=none. I don't like replica 3, because you
> can only have one brick offline at a time. I think N+2 should be the
> target for a production environment (so you have the capacity for a
> failure while doing maintenance). Would adding an arbiter affect the
> quorum status? Is 3x replica and 1x arbiter considered replica 3 or
> 4?
For the VM store use case, quorum=auto is what you need to set, to
ensure that your VM images are not corrupted (due to split-brain files).
With quorum=auto, an odd replica count is recommended. With an even
replica count, for instance 4, if 2 bricks fail there is a restriction
that the primary brick (the first brick specified when creating the
volume) must be UP. If the primary brick is down along with any other
brick, the volume becomes read-only.
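If the volume was not created with the virt group settings, client-side
quorum can also be enabled explicitly (the volume name "data" is an
assumption):

```shell
# Enable client-side quorum so writes are refused, rather than risking
# split brain, when a majority of the replica bricks is unreachable.
gluster volume set data cluster.quorum-type auto
```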
An arbiter does affect the quorum status. Arbiter is currently supported
only in a replica 3 setup: "replica 3 arbiter 1" means you have 3
bricks providing a 3-way replica, of which 1 is an arbiter brick.
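For illustration, a replica 3 arbiter 1 volume could be created like
this (requires gluster >= 3.7; host names and paths are placeholders):

```shell
# Two full data bricks plus one arbiter brick; the last brick listed in
# each replica set becomes the arbiter and stores only file metadata,
# so it needs far less disk space than the data bricks.
gluster volume create vmstore replica 3 arbiter 1 \
    host1:/gluster/bricks/vmstore \
    host2:/gluster/bricks/vmstore \
    host3:/gluster/bricks/arbiter
```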
>>> No chicken and egg here I think. You want a volume to be used as your
>>> master data domain and creating a new volume in a new gluster-cluster is
>>> independent of your datacenter status.
>>>
>>> You mentioned your hosts are on default cluster - so make sure your
>>> cluster support gluster service (you should have picked gluster as a service
>>> during engine install)
> I chose "both" during engine-setup, although I didn't have "gluster
> service" enabled on the default cluster at first. Also the vdsm-gluster
> rpm was not installed (I sort of feel like 'hosted-engine --deploy'
> should take care of that). Adding a host from my current physical
> engine using the "add host" gui didn't bring it in either.
In our hyperconverged testing, we installed vdsm and vdsm-gluster prior
to running hosted-engine --deploy to get around this (HE setup does not
support a hyperconverged setup yet).
Could you try that? That is, run
# yum install vdsm vdsm-gluster ovirt-hosted-engine-setup screen
prior to running hosted-engine --deploy on the 3 nodes.
Adding a host to a cluster with "gluster service" enabled should bring
in vdsm-gluster. Please provide the host-deploy logs if this did not
work for you.
HTH,
sahina
>
>
> Thanks for the input!