On Sat, Apr 1, 2017 at 10:32 PM, Jim Kusznir <jim@palousetech.com> wrote:

Thank you! Here's the output of gluster volume info:

[root@ovirt1 ~]# gluster volume info

Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
server.allow-insecure: on

Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: export
Type: Replicate
Volume ID: 04ee58c7-2ba1-454f-be99-26ac75a352b4
Status: Stopped
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick3/export
Brick2: ovirt2.nwfiber.com:/gluster/brick3/export
Brick3: ovirt3.nwfiber.com:/gluster/brick3/export (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

The node marked as (arbiter) on all of the bricks is the node that is not using any of its disk space.

This is by design - the arbiter brick only stores metadata and hence saves on storage.
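If you want to double-check that, comparing the same brick across the three nodes should show ovirt1 and ovirt2 holding real data while ovirt3 holds only directory entries and metadata. A quick illustrative check (the loop and root SSH access are just assumptions about how you reach the hosts; the brick mount point /gluster/brick2 is taken from the volume info above):

    # compare on-disk usage of the data brick on all three nodes
    for h in ovirt1 ovirt2 ovirt3; do
        echo "== $h =="
        ssh root@$h.nwfiber.com 'df -h /gluster/brick2; du -sh /gluster/brick2/data'
    done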
The engine domain is the volume dedicated to storing the hosted engine. Here's some LVM info:

--- Logical volume ---
LV Path                /dev/gluster/engine
LV Name                engine
VG Name                gluster
LV UUID                4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-0HD8-esm3wg
LV Write Access        read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:00 -0800
LV Status              available
# open                 1
LV Size                25.00 GiB
Current LE             6400
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:2

--- Logical volume ---
LV Name                lvthinpool
VG Name                gluster
LV UUID                aaNtso-fN1T-ZAkY-kUF2-dlxf-0ap2-JAwSid
LV Write Access        read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:09 -0800
LV Pool metadata       lvthinpool_tmeta
LV Pool data           lvthinpool_tdata
LV Status              available
# open                 4
LV Size                150.00 GiB
Allocated pool data    65.02%
Allocated metadata     14.92%
Current LE             38400
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:5

--- Logical volume ---
LV Path                /dev/gluster/data
LV Name                data
VG Name                gluster
LV UUID                NBxLOJ-vp48-GM4I-D9ON-4OcB-hZrh-MrDacn
LV Write Access        read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:11 -0800
LV Pool name           lvthinpool
LV Status              available
# open                 1
LV Size                100.00 GiB
Mapped size            90.28%
Current LE             25600
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:7

--- Logical volume ---
LV Path                /dev/gluster/export
LV Name                export
VG Name                gluster
LV UUID                bih4nU-1QfI-tE12-ZLp0-fSR5-dlKt-YHkhx8
LV Write Access        read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:20 -0800
LV Pool name           lvthinpool
LV Status              available
# open                 1
LV Size                25.00 GiB
Mapped size            0.12%
Current LE             6400
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:8

--- Logical volume ---
LV Path                /dev/gluster/iso
LV Name                iso
VG Name                gluster
LV UUID                l8l1JU-ViD3-IFiZ-TucN-tGPE-Toqc-Q3R6uX
LV Write Access        read/write
LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:29 -0800
LV Pool name           lvthinpool
LV Status              available
# open                 1
LV Size                25.00 GiB
Mapped size            28.86%
Current LE             6400
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:9

--- Logical volume ---
LV Path                /dev/centos_ovirt/swap
LV Name                swap
VG Name                centos_ovirt
LV UUID                PcVQ11-hQ9U-9KZT-QPuM-HwT6-8o49-2hzNkQ
LV Write Access        read/write
LV Creation host, time localhost, 2016-12-31 13:56:36 -0800
LV Status              available
# open                 2
LV Size                16.00 GiB
Current LE             4096
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:1

--- Logical volume ---
LV Path                /dev/centos_ovirt/root
LV Name                root
VG Name                centos_ovirt
LV UUID                g2h2fn-sF0r-Peos-hAE1-WEo9-WENO-MlO3ly
LV Write Access        read/write
LV Creation host, time localhost, 2016-12-31 13:56:36 -0800
LV Status              available
# open                 1
LV Size                20.00 GiB
Current LE             5120
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0

------------

I don't use the export gluster volume, and I've never used lvthinpool-type allocations before, so I'm not sure if there's anything special there. I followed the setup instructions from a piece of contributed oVirt documentation, which I can't find now, that described how to install oVirt with Gluster on a 3-node cluster.

Thank you for your assistance!
--Jim

On Thu, Mar 30, 2017 at 1:27 AM, Sahina Bose <sabose@redhat.com> wrote:

On Thu, Mar 30, 2017 at 1:23 PM, Liron Aravot <laravot@redhat.com> wrote:

Hi Jim, please see inline.

On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir <jim@palousetech.com> wrote:

Hello:

I've been running my oVirt 4.0.5.5-1.el7.centos cluster for a while now, and am now revisiting some aspects of it to make sure I have good reliability.

My cluster is a 3-node cluster, with gluster running on each node. After running the cluster for a bit, I'm realizing I didn't do a very good job of allocating the space on my disks to the different gluster mount points. Fortunately, they were created with LVM, so I'm hoping that I can resize them without much trouble.

I have a domain for iso, a domain for export, and a domain for storage, all thin provisioned, plus a domain for the engine, not thin provisioned. I'd like to expand the storage domain, and possibly shrink the engine domain and make that space also available to the main storage domain. Is it as simple as expanding the LVM partition, or are there more steps involved? Do I need to take the node offline?

I didn't completely understand that part - what is the difference between the domain for storage and the domain for engine that you mentioned?

I think the domain for engine is the one storing Hosted Engine data. You should be able to expand your underlying LVM partition without having to take the node offline.
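To make that concrete, here is a rough sketch of the usual sequence when the brick sits on a thin LV with an XFS filesystem, as the oVirt/Gluster setup guides create them. The +50G figures and the /gluster/brick2 mount point are placeholders, and it assumes the 'gluster' VG still has free extents (check with vgs first):

    # check free space in the VG and current thin pool usage
    vgs gluster
    lvs -a gluster

    # grow the thin pool, then the thin LV backing the data brick
    lvextend -L +50G gluster/lvthinpool
    lvextend -L +50G gluster/data

    # grow the XFS filesystem online on the mounted brick
    xfs_growfs /gluster/brick2

Repeat on each host so all replicas of the brick end up the same size. Shrinking the engine LV is another matter: XFS cannot be shrunk in place, so reclaiming its space would mean recreating that brick's filesystem and letting Gluster heal it back, which is probably not worth it for 25 GB.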
Second, I've noticed that the first two nodes seem to have a full copy of the data (their disks are in use), but the third node appears not to be using any of its storage space, even though it is participating in the gluster cluster.

Is the volume created as replica 3? If so, a full copy of the data should be present on all 3 nodes. Please provide the output of "gluster volume info".

Third, gluster currently shares the same network as the VM networks. I'd like to put it on its own network. I'm not sure how to do this: when I tried at install time, I never got the cluster to come online, and I had to make them share the same network to get it working.

The network intended for gluster should have been used to identify the brick (hostname:brick-directory) when the bricks were created. Changing this at a later point is a bit more involved - please check online or on gluster-users for changing the IP address associated with a brick.
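If you do decide to move the bricks to dedicated storage-network hostnames later, one approach that is sometimes suggested (it needs GlusterFS 3.9 or newer for the reset-brick command) is to re-register each brick under a new name that resolves on the storage network, one brick at a time. Purely a sketch - the ovirt1-gluster name is made up, and the new name has to be probed as a peer first (from another node):

    # make the storage-network name known to the trusted pool
    gluster peer probe ovirt1-gluster.nwfiber.com

    # re-register the brick under the new hostname (same data, nothing is copied)
    gluster volume reset-brick data ovirt1.nwfiber.com:/gluster/brick2/data start
    gluster volume reset-brick data ovirt1.nwfiber.com:/gluster/brick2/data \
        ovirt1-gluster.nwfiber.com:/gluster/brick2/data commit force

    # make sure nothing is left to heal before touching the next brick
    gluster volume heal data info

The same steps would have to be repeated for every brick of every volume, so confirm the procedure on gluster-users before trying it on a production cluster.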
I'm adding Sahina, who may shed some light on the gluster question; I'd try the gluster-users mailing list as well.

oVirt questions:

I've noticed that recently I don't appear to be getting software updates anymore. I used to get "update available" notifications on my nodes every few days; I haven't seen one for a couple of weeks now. Is something wrong?

I have a Windows 10 x64 VM. I get a warning that my VM type does not match the installed OS. Everything works fine, and I've quadruple-checked that it does match. Is this a known bug?

Arik, any info on that?

I have a UPS that all three nodes and the networking are on. It is a USB UPS. How should I best integrate monitoring for it? I could set up a Raspberry Pi and run NUT or similar on it, but is there a "better" way with oVirt?

Thanks!
--Jim
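On the UPS question: as far as I know oVirt itself doesn't do UPS monitoring (its power-management integration is about fencing, not shutdown on battery), so running NUT on whichever box the USB cable reaches - one of the hosts or the Pi you mention - is a reasonable approach. A minimal sketch of the NUT side, assuming a USB HID-compatible UPS and the nut packages from EPEL; the UPS name, user, and password are placeholders:

    # /etc/ups/ups.conf - how the driver finds the UPS
    [myups]
        driver = usbhid-ups
        port = auto

    # /etc/ups/upsd.users - account upsmon will authenticate with
    [monuser]
        password = secret
        upsmon master

    # /etc/ups/upsmon.conf - shut the host down when the battery runs low
    MONITOR myups@localhost 1 monuser secret master
    SHUTDOWNCMD "/sbin/shutdown -h +0"

The other two hosts could then run upsmon in slave mode against the same upsd so all three go down cleanly; anything fancier, such as pausing VMs or moving hosts to maintenance first, would have to be scripted around that.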
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users