Hi Nir,
> On Wed, Jan 20, 2016 at 1:31 AM, <arik.mitschang(a)sbibits.com> wrote:
>>> On 25-12-2015 5:26, Arik Mitschang wrote:
>>>> Hi ovirt-users,
>>>>
>>>> I have been working on a new install of oVirt 3.6 hosted-engine and
>>>> ran into difficulty adding a gluster data storage domain to host my
>>>> VMs. I have 4 servers for gluster (separate from the VM hosts) and
>>>> would like to have the quorum enforcement of replica 3 without
>>>> sacrificing space. I created a gluster volume using
>>>>
>>>>     replica 3 arbiter 1
>>>>
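>>>> For reference, that corresponds to a create command along these
>>>> lines (reconstructed from the volume info below, so the exact
>>>> invocation is approximate):
>>>>
>>>>     gluster volume create arbtest replica 3 arbiter 1 \
>>>>         t2-gluster01b:/gluster/00/arbtest \
>>>>         t2-gluster02b:/gluster/00/arbtest \
>>>>         t2-gluster03b:/gluster/00/arbtest.arb \
>>>>         t2-gluster03b:/gluster/00/arbtest \
>>>>         t2-gluster04b:/gluster/00/arbtest \
>>>>         t2-gluster01b:/gluster/00/arbtest.arb
>>>>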
>>>> The resulting volume looks like this:
>>>>
>>>> Volume Name: arbtest
>>>> Type: Distributed-Replicate
>>>> Volume ID: 01b36368-1f37-435c-9f48-0442e0c34160
>>>> Status: Stopped
>>>> Number of Bricks: 2 x 3 = 6
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: t2-gluster01b:/gluster/00/arbtest
>>>> Brick2: t2-gluster02b:/gluster/00/arbtest
>>>> Brick3: t2-gluster03b:/gluster/00/arbtest.arb
>>>> Brick4: t2-gluster03b:/gluster/00/arbtest
>>>> Brick5: t2-gluster04b:/gluster/00/arbtest
>>>> Brick6: t2-gluster01b:/gluster/00/arbtest.arb
>>>> Options Reconfigured:
>>>> nfs.disable: true
>>>> network.ping-timeout: 10
>>>> storage.owner-uid: 36
>>>> storage.owner-gid: 36
>>>> cluster.server-quorum-type: server
>>>> cluster.quorum-type: auto
>>>> network.remote-dio: enable
>>>> cluster.eager-lock: enable
>>>> performance.stat-prefetch: off
>>>> performance.io-cache: off
>>>> performance.read-ahead: off
>>>> performance.quick-read: off
>>>> performance.readdir-ahead: on
>>>>
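>>>> (The reconfigured options above are per-volume settings applied with
>>>> gluster volume set, e.g.:
>>>>
>>>>     gluster volume set arbtest storage.owner-uid 36
>>>>     gluster volume set arbtest storage.owner-gid 36
>>>>
>>>> where 36:36 is the vdsm:kvm user and group that oVirt expects to own
>>>> the storage.)
>>>>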
>>>> But when adding it to oVirt as a storage domain I get the following
>>>> error:
>>>>
>>>> "Error while executing action AddGlusterFsStorageDomain: Error
>>>> creating a storage domain's metadata"
> In the vdsm log we see:
>
> StorageDomainMetadataCreationError: Error creating a storage domain's
> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
> error",)
>
> Which does not mean much.
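> Errno 5 (EIO) usually means the write failed on the gluster mount
> itself. One quick check is a direct-I/O write on the mount as the
> vdsm user, for example (mount point and file name are only examples):
>
>     sudo -u vdsm dd if=/dev/zero of=/mnt/dd_test bs=4096 count=1 oflag=direct
>
> If that fails with an I/O error too, the problem is on the gluster
> side rather than in vdsm.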
>>>>
>>> Anything in engine.log (/var/log/ovirt-engine/engine.log) around that
>>> time? Anything in vdsm.log on your 2 hypervisors around that time?
>>> (Guessing that you'll see an error about replication unsupported by
>>> vdsm; if so, have a look at /etc/vdsm/vdsm.conf.rpmnew)
>>
>> Hi Joop,
>>
>> Thanks for your response, and sorry for the long delay in mine. I had
>> a chance to test adding again and to catch the logs around that
>> operation. I am attaching the engine log and the vdsm log of the
>> hypervisor that was responsible for the storage operations.
>>
>> Also, I have the following in /etc/vdsm/vdsm.conf:
>>
>>     [gluster]
>>     allowed_replica_counts = 1,2,3
>>
>> The volume was successfully mounted, and I see the following in it
>> after trying to add the domain:
>>
>>     arik@t2-virt01:~$ sudo mount -t glusterfs t2-gluster01b:arbtest /mnt/
>>     arik@t2-virt01:~$ ls -ltr /mnt/
>>     total 0
>>     -rwxr-xr-x 1 vdsm kvm  0 Jan 20 08:08 __DIRECT_IO_TEST__
>>     drwxr-xr-x 3 vdsm kvm 54 Jan 20 08:08 3d31af0b-18ad-45c4-90f1-18e2f804f34b
>>
>> I hope you can see something interesting in these logs!
> You may find more info in the gluster mount log, which should be at:
>
>     /var/log/glusterfs/<server>:<volname><date>.log
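> Grepping that log for error lines (gluster marks them with " E "),
> e.g. grep ' E ' <logfile>, is a quick way to spot the failure.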
I will take a look and see if I can find something in the gluster logs.
> We (oVirt storage developers) have not tried arbiter volumes yet, so
> this is basically unsupported :-)
Ah, understood. Any plans to try?
> The recommended setup is replica 3. Can you try to create a small
> replica 3 volume, just to check that replica 3 works for you?
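> For example, something like this on three of your gluster hosts (the
> volume name and brick paths are only placeholders):
>
>     gluster volume create reptest replica 3 \
>         t2-gluster01b:/gluster/00/reptest \
>         t2-gluster02b:/gluster/00/reptest \
>         t2-gluster03b:/gluster/00/reptest
>     gluster volume start reptest
>
> and then try adding that as a storage domain.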
The hosted engine for our setup is on a replica 3 volume, and this works
well.
Regards,
-Arik