[ovirt-users] ovirt 3.6 and gluster arbiter volumes?

arik.mitschang at sbibits.com
Tue Jan 19 23:31:21 UTC 2016


> On 25-12-2015 5:26, Arik Mitschang wrote:
>> Hi ovirt-users,
>>
>> I have been working on a new install of ovirt 3.6 hosted-engine and ran
>> into difficulty adding a gluster data storage domain to host my VMs. I
>> have 4 servers for gluster (separate from vm hosts) and would like to
>> have the quorum enforcement of replica 3 without sacrificing space. I
>> created a gluster volume using
>>
>>  replica 3 arbiter 1
>>
>> That looks like this:
>>
>>  Volume Name: arbtest
>>  Type: Distributed-Replicate
>>  Volume ID: 01b36368-1f37-435c-9f48-0442e0c34160
>>  Status: Stopped
>>  Number of Bricks: 2 x 3 = 6
>>  Transport-type: tcp
>>  Bricks:
>>  Brick1: t2-gluster01b:/gluster/00/arbtest
>>  Brick2: t2-gluster02b:/gluster/00/arbtest
>>  Brick3: t2-gluster03b:/gluster/00/arbtest.arb
>>  Brick4: t2-gluster03b:/gluster/00/arbtest
>>  Brick5: t2-gluster04b:/gluster/00/arbtest
>>  Brick6: t2-gluster01b:/gluster/00/arbtest.arb
>>  Options Reconfigured:
>>  nfs.disable: true
>>  network.ping-timeout: 10
>>  storage.owner-uid: 36
>>  storage.owner-gid: 36
>>  cluster.server-quorum-type: server
>>  cluster.quorum-type: auto
>>  network.remote-dio: enable
>>  cluster.eager-lock: enable
>>  performance.stat-prefetch: off
>>  performance.io-cache: off
>>  performance.read-ahead: off
>>  performance.quick-read: off
>>  performance.readdir-ahead: on
>>
>> But when adding it to oVirt as a storage domain, I get the following error:
>>
>>  "Error while executing action AddGlusterFsStorageDomain: Error creating
>>  a storage domain's metadata"
>>
>>
> Anything in engine.log (/var/log/ovirt-engine/engine.log) around that time?
> Anything in vdsm.log on your 2 hypervisors around that time?
> (Guessing that you'll see an error about replication unsupported by
> vdsm, if so, have a look at /etc/vdsmd.conf.rpmnew)

Hi Joop,

Thanks for your response, and sorry for the long delay in mine. I had a
chance to test adding again and catch the logs around that operation. I
am attaching the engine logs and vdsm logs of the hypervisor that was
responsible for the storage operations.
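To pull out the relevant window, I filtered the logs around the time of the failed action, roughly like this (log paths assumed from a default oVirt/vdsm install; the grep patterns are just guesses at the relevant keywords):

```shell
# Engine side: entries mentioning the failed storage-domain action
sudo grep -n 'AddGlusterFsStorageDomain\|ERROR' /var/log/ovirt-engine/engine.log

# Hypervisor side: vdsm entries around storage-domain creation
sudo grep -n 'createStorageDomain\|Traceback' /var/log/vdsm/vdsm.log
```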

Also, I have the following:

 [gluster]
 allowed_replica_counts = 1,2,3

in /etc/vdsm/vdsm.conf.
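In case it matters, I restarted vdsm after editing the file so the setting would be picked up (assuming a systemd host; the service name is from a standard vdsm install):

```shell
# Restart vdsm so /etc/vdsm/vdsm.conf is re-read, then confirm it came back up
sudo systemctl restart vdsmd
sudo systemctl status vdsmd
```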

The volume mounts successfully, and after the failed add attempt it
contains the following:

 arik@t2-virt01:~$ sudo mount -t glusterfs t2-gluster01b:arbtest /mnt/
 arik@t2-virt01:~$ ls -ltr /mnt/
 total 0
 -rwxr-xr-x 1 vdsm kvm  0 Jan 20 08:08 __DIRECT_IO_TEST__
 drwxr-xr-x 3 vdsm kvm 54 Jan 20 08:08 3d31af0b-18ad-45c4-90f1-18e2f804f34b
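In case it's useful, the volume can also be sanity-checked from the gluster side with the standard CLI (run on one of the storage nodes; this is just the usual health check, not something that showed a problem for me):

```shell
# Confirm all six bricks (including the two arbiters) are online
sudo gluster volume status arbtest

# Check for any pending self-heals on the volume
sudo gluster volume heal arbtest info
```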

I hope you can see something interesting in these logs!

Regards,
-Arik

ENGINE logs:

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: arbiter_engine.log
URL: <http://lists.ovirt.org/pipermail/users/attachments/20160120/064ce6b9/attachment-0002.ksh>

VDSM logs:

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: arbiter_vdsm.log
URL: <http://lists.ovirt.org/pipermail/users/attachments/20160120/064ce6b9/attachment-0003.ksh>

