[root@ovirt1 ~]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
server.allow-insecure: on

Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: export
Type: Replicate
Volume ID: 04ee58c7-2ba1-454f-be99-26ac75a352b4
Status: Stopped
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick3/export
Brick2: ovirt2.nwfiber.com:/gluster/brick3/export
Brick3: ovirt3.nwfiber.com:/gluster/brick3/export (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

The node marked as (arbiter) on all of the bricks is the one that isn't using any of its disk space.
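For what it's worth, this is how I'd confirm that on the bricks themselves (just the commands, no output included; an arbiter brick only stores file names and metadata, so its usage should stay near zero while the full replicas hold the real data):

[root@ovirt1 ~]# du -sh /gluster/brick2/data   # full replica brick, holds the actual data
[root@ovirt3 ~]# du -sh /gluster/brick2/data   # arbiter brick, should only be a few MB of metadata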
The engine domain is the volume dedicated to storing the hosted engine (see the note after the LVM output below). Here's some LVM info:
--- Logical volume ---
LV Path /dev/gluster/engine
LV Name engine
VG Name gluster
LV UUID 4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-0HD8-esm3wg
LV Write Access read/write
LV Status available
# open 1
LV Size 25.00 GiB
Current LE 6400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Name lvthinpool
VG Name gluster
LV UUID aaNtso-fN1T-ZAkY-kUF2-dlxf-0ap2-JAwSid
LV Write Access read/write
LV Pool metadata lvthinpool_tmeta
LV Pool data lvthinpool_tdata
LV Status available
# open 4
LV Size 150.00 GiB
Allocated pool data 65.02%
Allocated metadata 14.92%
Current LE 38400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
--- Logical volume ---
LV Path /dev/gluster/data
LV Name data
VG Name gluster
LV UUID NBxLOJ-vp48-GM4I-D9ON-4OcB-hZrh-MrDacn
LV Write Access read/write
LV Pool name lvthinpool
LV Status available
# open 1
LV Size 100.00 GiB
Mapped size 90.28%
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Logical volume ---
LV Path /dev/gluster/export
LV Name export
VG Name gluster
LV UUID bih4nU-1QfI-tE12-ZLp0-fSR5-dlKt-YHkhx8
LV Write Access read/write
LV Pool name lvthinpool
LV Status available
# open 1
LV Size 25.00 GiB
Mapped size 0.12%
Current LE 6400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
--- Logical volume ---
LV Path /dev/gluster/iso
LV Name iso
VG Name gluster
LV UUID l8l1JU-ViD3-IFiZ-TucN-tGPE-Toqc-Q3R6uX
LV Write Access read/write
LV Pool name lvthinpool
LV Status available
# open 1
LV Size 25.00 GiB
Mapped size 28.86%
Current LE 6400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:9
--- Logical volume ---
LV Path /dev/centos_ovirt/swap
LV Name swap
VG Name centos_ovirt
LV UUID PcVQ11-hQ9U-9KZT-QPuM-HwT6-8o49-2hzNkQ
LV Write Access read/write
LV Creation host, time localhost, 2016-12-31 13:56:36 -0800
LV Status available
# open 2
LV Size 16.00 GiB
Current LE 4096
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/centos_ovirt/root
LV Name root
VG Name centos_ovirt
LV UUID g2h2fn-sF0r-Peos-hAE1-WEo9-WENO-MlO3ly
LV Write Access read/write
LV Creation host, time localhost, 2016-12-31 13:56:36 -0800
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
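Relating that back to the engine volume: the hosted engine mounts it as a GlusterFS storage domain. If it's useful, this is the command I'd use to double-check where the hosted-engine storage points (assuming the usual config location, /etc/ovirt-hosted-engine/hosted-engine.conf):

[root@ovirt1 ~]# grep ^storage= /etc/ovirt-hosted-engine/hosted-engine.conf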
I don't use the export gluster volume, and I've never used thin-pool (lvthinpool) allocation before, so I'm not sure if there's anything special there.
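If it matters, here's how I'd pull the thin-pool usage into one view (a sketch only; I believe the data_percent/metadata_percent fields report how much of the pool each thin LV actually consumes):

[root@ovirt1 ~]# lvs -a -o lv_name,pool_lv,lv_size,data_percent,metadata_percent gluster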
I followed the setup instructions from a piece of contributed oVirt documentation (which I can't find now) that described how to install oVirt with Gluster on a 3-node cluster.