[ovirt-users] HE (3.6) on gluster storage, chicken and egg status

Fil Di Noto fdinoto at gmail.com
Tue Jan 5 16:03:10 UTC 2016


On Thu, Dec 31, 2015 at 3:44 AM, Donny Davis <donny at cloudspin.me> wrote:
> I would say you would be much better off picking two hosts to do your HE on
> and setting up DRBD for the HE storage. You will have fewer problems with
> your HE.

Is DRBD an oVirt thing? Or are you suggesting a database backup or a
replicated SQL database solution that I would cut over to manually?


> On Thu, Dec 31, 2015 at 1:41 AM, Sahina Bose <sabose at redhat.com> wrote:
>>
>> If you mean, creating new gluster volumes - you need to make sure the
>> gluster service is enabled on the Default cluster. The cluster that HE
>> creates, has only virt service enabled by default. Engine should have been
>> installed in "Both" mode like Roy mentioned.

I went over those settings with limited success. I went through
multiple iterations of trying to stand up HE on gluster storage, then
gave up on that and tried NFS storage.
Ultimately it always came down to sanlock errors (on both gluster and
NFS storage). I tried restarting the sanlock service, which would lead
to the watchdog rebooting the hosts. When a host came back up, it
almost seemed like things had started working: I could begin to
see/create gluster volumes (see hosted_storage, or begin to create a
data storage domain).

But when I would try to activate the hosted_storage domain, things
would start to fall apart again, with sanlock the culprit as far as I
can tell.
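For anyone hitting similar symptoms, these are the commands I found useful for inspecting sanlock state (log path is the default on EL7; adjust for your setup):

```shell
# Show the lockspaces and resources sanlock currently holds
sanlock client status

# Dump sanlock's in-memory debug log
sanlock client log_dump

# The persistent log usually shows lease acquisition failures
tail -n 50 /var/log/sanlock.log
```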

I am currently running the engine on a physical system and things are
working fine. I am considering taking a backup and attempting the
physical-to-VM HE migration method, time permitting.


>> On 12/28/2015 12:43 AM, Roy Golan wrote:
>>
>>
>> 3 way replica is the officially supported replica count for VM store use
>> case. If you wish to work with replica 4, you can update the
>> supported_replica_count in vdsm.conf

Thanks for that insight. I think I just experienced the bad aspects of
both quorum=auto and quorum=none. I don't like replica 3, because you
can only have one brick offline at a time; I think N+2 should be the
target for a production environment (so you have capacity for a
failure while doing maintenance). Would adding an arbiter affect the
quorum status? Is 3x replica plus 1x arbiter considered replica 3 or
replica 4?
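To frame the question: as I understand it, the Gluster CLI creates an arbiter volume as a replica 3 volume whose third brick holds only metadata, along these lines (hostnames and brick paths are made up):

```shell
# Replica 3 where the third brick is a metadata-only arbiter
gluster volume create hevol replica 3 arbiter 1 \
    host1:/bricks/hevol host2:/bricks/hevol host3:/bricks/hevol
```

What I'm unsure of is how that arbiter brick counts toward quorum compared with a full fourth replica.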

>>  No chicken and egg here I think. You want a volume to be used as your
>> master data domain and creating a new volume in a new gluster-cluster is
>> independent of your datacenter status.
>>
>> You mentioned your hosts are on default cluster - so make sure your
>> cluster support gluster service (you should have picked gluster as a service
>> during engine install)

I chose "Both" during engine-setup, although I didn't have the
"gluster service" enabled on the Default cluster at first. Also, the
vdsm-gluster rpm was not installed (I sort of feel like 'hosted-engine
--deploy' should take care of that). Adding a host from my current
physical engine using the "add host" GUI didn't bring it in either.
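For what it's worth, installing the missing package by hand was my workaround, not an official step:

```shell
# Install the gluster support plugin for vdsm, then restart vdsm
yum install -y vdsm-gluster
systemctl restart vdsmd
```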


Thanks for the input!


