On Sat, Apr 1, 2017 at 10:32 PM, Jim Kusznir <jim@palousetech.com> wrote:
Thank you!
Here's the output of gluster volume info:
[root@ovirt1 ~]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
server.allow-insecure: on
Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: export
Type: Replicate
Volume ID: 04ee58c7-2ba1-454f-be99-26ac75a352b4
Status: Stopped
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick3/export
Brick2: ovirt2.nwfiber.com:/gluster/brick3/export
Brick3: ovirt3.nwfiber.com:/gluster/brick3/export (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
The node marked as (arbiter) in all of the volumes above is the one that is
not using any of its disk space.
This is by design - the arbiter brick stores only metadata and hence needs
very little storage.
The engine domain is the volume dedicated for storing the hosted engine.
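For reference, a replica 3 arbiter 1 volume of this shape is typically created
along these lines (a sketch only - the hostnames and brick paths are taken from
the volume info above, and these volumes already exist, so there is nothing to
run here):

    gluster volume create data replica 3 arbiter 1 \
        ovirt1.nwfiber.com:/gluster/brick2/data \
        ovirt2.nwfiber.com:/gluster/brick2/data \
        ovirt3.nwfiber.com:/gluster/brick2/data
    gluster volume start data

The third brick listed becomes the arbiter: it holds only file names and
metadata, never file contents, which is why ovirt3 shows almost no disk usage.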
Here's some LVM info:
--- Logical volume ---
LV Path /dev/gluster/engine
LV Name engine
VG Name gluster
LV UUID 4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-0HD8-esm3wg
LV Write Access read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:00 -0800
LV Status available
# open 1
LV Size 25.00 GiB
Current LE 6400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Name lvthinpool
VG Name gluster
LV UUID aaNtso-fN1T-ZAkY-kUF2-dlxf-0ap2-JAwSid
LV Write Access read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:09 -0800
LV Pool metadata lvthinpool_tmeta
LV Pool data lvthinpool_tdata
LV Status available
# open 4
LV Size 150.00 GiB
Allocated pool data 65.02%
Allocated metadata 14.92%
Current LE 38400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
--- Logical volume ---
LV Path /dev/gluster/data
LV Name data
VG Name gluster
LV UUID NBxLOJ-vp48-GM4I-D9ON-4OcB-hZrh-MrDacn
LV Write Access read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:11 -0800
LV Pool name lvthinpool
LV Status available
# open 1
LV Size 100.00 GiB
Mapped size 90.28%
Current LE 25600
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Logical volume ---
LV Path /dev/gluster/export
LV Name export
VG Name gluster
LV UUID bih4nU-1QfI-tE12-ZLp0-fSR5-dlKt-YHkhx8
LV Write Access read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:20 -0800
LV Pool name lvthinpool
LV Status available
# open 1
LV Size 25.00 GiB
Mapped size 0.12%
Current LE 6400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
--- Logical volume ---
LV Path /dev/gluster/iso
LV Name iso
VG Name gluster
LV UUID l8l1JU-ViD3-IFiZ-TucN-tGPE-Toqc-Q3R6uX
LV Write Access read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:29 -0800
LV Pool name lvthinpool
LV Status available
# open 1
LV Size 25.00 GiB
Mapped size 28.86%
Current LE 6400
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:9
--- Logical volume ---
LV Path /dev/centos_ovirt/swap
LV Name swap
VG Name centos_ovirt
LV UUID PcVQ11-hQ9U-9KZT-QPuM-HwT6-8o49-2hzNkQ
LV Write Access read/write
LV Creation host, time localhost, 2016-12-31 13:56:36 -0800
LV Status available
# open 2
LV Size 16.00 GiB
Current LE 4096
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/centos_ovirt/root
LV Name root
VG Name centos_ovirt
LV UUID g2h2fn-sF0r-Peos-hAE1-WEo9-WENO-MlO3ly
LV Write Access read/write
LV Creation host, time localhost, 2016-12-31 13:56:36 -0800
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
------------
I don't use the export gluster volume, and I've never used lvthinpool-type
allocations before, so I'm not sure if there's anything special there.
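As a point of reference, the thin pool and the thin volumes carved out of it
can be inspected with plain LVM tools - a small sketch, using the volume group
name gluster from the output above:

    lvs -a gluster
    # Data% on lvthinpool is how full the pool itself is (65.02% above);
    # Data% on data/export/iso is how much of each thin LV has been written.

Thin LVs consume pool space only as data is written, so the three thin volumes
(100 + 25 + 25 GiB) exactly fill the 150 GiB pool on paper, while the pool is
actually only about 65% used.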
I followed the setup instructions from an oVirt-contributed document (which I
can't find now) that described how to install oVirt with Gluster on a 3-node
cluster.
Thank you for your assistance!
--Jim
On Thu, Mar 30, 2017 at 1:27 AM, Sahina Bose <sabose@redhat.com> wrote:
>
>
> On Thu, Mar 30, 2017 at 1:23 PM, Liron Aravot <laravot@redhat.com> wrote:
>
>> Hi Jim, please see inline
>>
>> On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir <jim@palousetech.com>
>> wrote:
>>
>>> Hello:
>>>
>>> I've been running my ovirt Version 4.0.5.5-1.el7.centos cluster for a
>>> while now, and am now revisiting some aspects of it for ensuring that I
>>> have good reliability.
>>>
>>> My cluster is a 3 node cluster, with gluster nodes running on each
>>> node. After running my cluster a bit, I'm realizing I didn't do a very
>>> optimal job of allocating the space on my disk to the different gluster
>>> mount points. Fortunately, they were created with LVM, so I'm hoping that
>>> I can resize them without much trouble.
>>>
>>> I have a domain for iso, domain for export, and domain for storage, all
>>> thin provisioned; then a domain for the engine, not thin provisioned. I'd
>>> like to expand the storage domain, and possibly shrink the engine domain
>>> and make that space also available to the main storage domain. Is it as
>>> simple as expanding the LVM partition, or are there more steps involved?
>>> Do I need to take the node offline?
>>>
>>
>> I didn't completely understand that part - what is the difference
>> between the domain for storage and the domain for engine you mentioned?
>>
>
> I think the domain for engine is the one storing Hosted Engine data.
> You should be able to expand your underlying LVM partition without having
> to take the node offline.
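> As a rough sketch (assuming the bricks are XFS on LVM, which is what the
> usual oVirt/Gluster setup creates, and that the gluster volume group has
> free extents), growing the data volume online on each node would look
> roughly like:
>
>     lvextend -L +25G /dev/gluster/data
>     xfs_growfs /gluster/brick2    # mount point of the brick filesystem (path assumed)
>
> Since data is a thin LV, lvextend only raises its virtual size; pool usage
> grows as new blocks are written. Note also that XFS can be grown but not
> shrunk, so reclaiming space from the engine volume would mean recreating
> that filesystem rather than simply shrinking its LV.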
>
>
>>
>>> Second, I've noticed that the first two nodes seem to have a full copy
>>> of the data (the disks are in use), but the 3rd node appears not to be
>>> using any of its storage space... It is participating in the gluster
>>> cluster, though.
>>>
>>
> Is the volume created as replica 3? If so, a full copy of the data should
> be present on all 3 nodes. Please provide the output of "gluster volume
> info"
>
>
>>> Third, currently gluster shares the same network as the VM networks.
>>> I'd like to put it on its own network. I'm not sure how to do this, as
>>> when I tried to do it at install time, I never got the cluster to come
>>> online; I had to make them share the same network to make that work.
>>>
>>
> While creating the bricks, the network intended for gluster should have
> been used to identify each brick as hostname:brick-directory. Changing this
> at a later point is a bit more involved. Please check online or on
> gluster-users about changing the IP address associated with a brick.
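> For what it's worth, newer Gluster releases have a reset-brick command meant
> for re-addressing an existing brick under a new hostname/IP without copying
> its data. A rough, unverified sketch, where ovirt1-gluster.nwfiber.com is a
> hypothetical name on the dedicated gluster network:
>
>     gluster volume reset-brick data ovirt1.nwfiber.com:/gluster/brick2/data start
>     gluster volume reset-brick data ovirt1.nwfiber.com:/gluster/brick2/data \
>         ovirt1-gluster.nwfiber.com:/gluster/brick2/data commit force
>
> Whether the Gluster version on these nodes supports it, and the exact
> sequence to follow per brick, is best confirmed on gluster-users as
> suggested above.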
>
>
>>
>> I'm adding Sahina, who may shed some light on the gluster question; I'd
>> try the gluster mailing list as well.
>>
>>>
>>>
>>> Ovirt questions:
>>> I've noticed that recently, I don't appear to be getting software
>>> updates anymore. I used to get update available notifications on my nodes
>>> every few days; I haven't seen one for a couple of weeks now. Is something
>>> wrong?
>>>
>>> I have a Windows 10 x64 VM. I get a warning that my VM type does not
>>> match the installed OS. All works fine, but I've quadruple-checked that
>>> it does match. Is this a known bug?
>>>
>>
>> Arik, any info on that?
>>
>>>
>>> I have a UPS that all three nodes and the networking are on. It is a
>>> USB UPS. How should I best integrate monitoring in? I could put a
>>> Raspberry Pi up and then run NUT or similar on it, but is there a "better"
>>> way with oVirt?
>>>
>>> Thanks!
>>> --Jim
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>