<div dir="ltr">Based on the suggestions here, I did successfully remove the unused export gluster brick and allocate all otherwise unassigned space to my data export, then used xfs_growfs to realize the new size. This should hold me for a while longer before building a "proper" storage solution.<div><br></div><div>--Jim</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Apr 1, 2017 at 10:02 AM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thank you!<div><br></div><div>Here's the output of gluster volume info:</div><div><div>[root@ovirt1 ~]# gluster volume info</div><div> </div><div>Volume Name: data</div><div>Type: Replicate</div><div>Volume ID: e670c488-ac16-4dd1-8bd3-<wbr>e43b2e42cc59</div><div>Status: Started</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/<wbr>brick2/data</div><div>Brick2: ovirt2.nwfiber.com:/gluster/<wbr>brick2/data</div><div>Brick3: ovirt3.nwfiber.com:/gluster/<wbr>brick2/data (arbiter)</div><div>Options Reconfigured:</div><div>performance.strict-o-direct: on</div><div>nfs.disable: on</div><div>user.cifs: off</div><div>network.ping-timeout: 30</div><div>cluster.shd-max-threads: 6</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.locking-scheme: granular</div><div>cluster.data-self-heal-<wbr>algorithm: full</div><div>performance.low-prio-threads: 32</div><div>features.shard-block-size: 512MB</div><div>features.shard: on</div><div>storage.owner-gid: 36</div><div>storage.owner-uid: 36</div><div>cluster.server-quorum-type: server</div><div>cluster.quorum-type: auto</div><div>network.remote-dio: enable</div><div>cluster.eager-lock: enable</div><div>performance.stat-prefetch: off</div><div>performance.io-cache: off</div><div>performance.read-ahead: off</div><div>performance.quick-read: off</div><div>performance.readdir-ahead: on</div><div>server.allow-insecure: on</div><div> </div><div>Volume Name: engine</div><div>Type: Replicate</div><div>Volume ID: 87ad86b9-d88b-457e-ba21-<wbr>5d3173c612de</div><div>Status: Started</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/<wbr>brick1/engine</div><div>Brick2: ovirt2.nwfiber.com:/gluster/<wbr>brick1/engine</div><div>Brick3: ovirt3.nwfiber.com:/gluster/<wbr>brick1/engine (arbiter)</div><div>Options Reconfigured:</div><div>performance.readdir-ahead: on</div><div>performance.quick-read: off</div><div>performance.read-ahead: off</div><div>performance.io-cache: off</div><div>performance.stat-prefetch: off</div><div>cluster.eager-lock: enable</div><div>network.remote-dio: off</div><div>cluster.quorum-type: auto</div><div>cluster.server-quorum-type: server</div><div>storage.owner-uid: 36</div><div>storage.owner-gid: 36</div><div>features.shard: on</div><div>features.shard-block-size: 512MB</div><div>performance.low-prio-threads: 32</div><div>cluster.data-self-heal-<wbr>algorithm: full</div><div>cluster.locking-scheme: granular</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.shd-max-threads: 6</div><div>network.ping-timeout: 30</div><div>user.cifs: off</div><div>nfs.disable: on</div><div>performance.strict-o-direct: on</div><div> </div><div>Volume Name: export</div><div>Type: Replicate</div><div>Volume ID: 04ee58c7-2ba1-454f-be99-<wbr>26ac75a352b4</div><div>Status: Stopped</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/<wbr>brick3/export</div><div>Brick2: ovirt2.nwfiber.com:/gluster/<wbr>brick3/export</div><div>Brick3: ovirt3.nwfiber.com:/gluster/<wbr>brick3/export (arbiter)</div><div>Options Reconfigured:</div><div>performance.readdir-ahead: on</div><div>performance.quick-read: off</div><div>performance.read-ahead: off</div><div>performance.io-cache: off</div><div>performance.stat-prefetch: off</div><div>cluster.eager-lock: enable</div><div>network.remote-dio: off</div><div>cluster.quorum-type: auto</div><div>cluster.server-quorum-type: server</div><div>storage.owner-uid: 36</div><div>storage.owner-gid: 36</div><div>features.shard: on</div><div>features.shard-block-size: 512MB</div><div>performance.low-prio-threads: 32</div><div>cluster.data-self-heal-<wbr>algorithm: full</div><div>cluster.locking-scheme: granular</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.shd-max-threads: 6</div><div>network.ping-timeout: 30</div><div>user.cifs: off</div><div>nfs.disable: on</div><div>performance.strict-o-direct: on</div><div> </div><div>Volume Name: iso</div><div>Type: Replicate</div><div>Volume ID: b1ba15f5-0f0f-4411-89d0-<wbr>595179f02b92</div><div>Status: Started</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/<wbr>brick4/iso</div><div>Brick2: ovirt2.nwfiber.com:/gluster/<wbr>brick4/iso</div><div>Brick3: ovirt3.nwfiber.com:/gluster/<wbr>brick4/iso (arbiter)</div><div>Options Reconfigured:</div><div>performance.readdir-ahead: on</div><div>performance.quick-read: off</div><div>performance.read-ahead: off</div><div>performance.io-cache: off</div><div>performance.stat-prefetch: off</div><div>cluster.eager-lock: enable</div><div>network.remote-dio: off</div><div>cluster.quorum-type: auto</div><div>cluster.server-quorum-type: server</div><div>storage.owner-uid: 36</div><div>storage.owner-gid: 36</div><div>features.shard: on</div><div>features.shard-block-size: 512MB</div><div>performance.low-prio-threads: 32</div><div>cluster.data-self-heal-<wbr>algorithm: full</div><div>cluster.locking-scheme: granular</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.shd-max-threads: 6</div><div>network.ping-timeout: 30</div><div>user.cifs: off</div><div>nfs.disable: on</div><div>performance.strict-o-direct: on</div></div><div><br></div><div><br></div><div>The node marked as (arbiter) on all of the bricks is the node that is not using any of its disk space.</div><div><br></div><div>The engine domain is the volume dedicated for storing the hosted engine. Here's some LVM info:</div><div><br></div><div><div> --- Logical volume ---</div><div> LV Path /dev/gluster/engine</div><div> LV Name engine</div><div> VG Name gluster</div><div> LV UUID 4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-<wbr>0HD8-esm3wg</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:00 -0800</div><div> LV Status available</div><div> # open 1</div><div> LV Size 25.00 GiB</div><div> Current LE 6400</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:2</div><div> </div><div> --- Logical volume ---</div><div> LV Name lvthinpool</div><div> VG Name gluster</div><div> LV UUID aaNtso-fN1T-ZAkY-kUF2-dlxf-<wbr>0ap2-JAwSid</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:09 -0800</div><div> LV Pool metadata lvthinpool_tmeta</div><div> LV Pool data lvthinpool_tdata</div><div> LV Status available</div><div> # open 4</div><div> LV Size 150.00 GiB</div><div> Allocated pool data 65.02%</div><div> Allocated metadata 14.92%</div><div> Current LE 38400</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:5</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/gluster/data</div><div> LV Name data</div><div> VG Name gluster</div><div> LV UUID NBxLOJ-vp48-GM4I-D9ON-4OcB-<wbr>hZrh-MrDacn</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:11 -0800</div><div> LV Pool name lvthinpool</div><div> LV Status available</div><div> # open 1</div><div> LV Size 100.00 GiB</div><div> Mapped size 90.28%</div><div> Current LE 25600</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:7</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/gluster/export</div><div> LV Name export</div><div> VG Name gluster</div><div> LV UUID bih4nU-1QfI-tE12-ZLp0-fSR5-<wbr>dlKt-YHkhx8</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:20 -0800</div><div> LV Pool name lvthinpool</div><div> LV Status available</div><div> # open 1</div><div> LV Size 25.00 GiB</div><div> Mapped size 0.12%</div><div> Current LE 6400</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:8</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/gluster/iso</div><div> LV Name iso</div><div> VG Name gluster</div><div> LV UUID l8l1JU-ViD3-IFiZ-TucN-tGPE-<wbr>Toqc-Q3R6uX</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:29 -0800</div><div> LV Pool name lvthinpool</div><div> LV Status available</div><div> # open 1</div><div> LV Size 25.00 GiB</div><div> Mapped size 28.86%</div><div> Current LE 6400</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:9</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/centos_ovirt/swap</div><div> LV Name swap</div><div> VG Name centos_ovirt</div><div> LV UUID PcVQ11-hQ9U-9KZT-QPuM-HwT6-<wbr>8o49-2hzNkQ</div><div> LV Write Access read/write</div><div> LV Creation host, time localhost, 2016-12-31 13:56:36 -0800</div><div> LV Status available</div><div> # open 2</div><div> LV Size 16.00 GiB</div><div> Current LE 4096</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:1</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/centos_ovirt/root</div><div> LV Name root</div><div> VG Name centos_ovirt</div><div> LV UUID g2h2fn-sF0r-Peos-hAE1-WEo9-<wbr>WENO-MlO3ly</div><div> LV Write Access read/write</div><div> LV Creation host, time localhost, 2016-12-31 13:56:36 -0800</div><div> LV Status available</div><div> # open 1</div><div> LV Size 20.00 GiB</div><div> Current LE 5120</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:0</div></div><div><br></div><div>------------</div><div><br></div><div>I don't use the export gluster volume, and I've never used lvthinpool-type allocations before, so I'm not sure if there's anything special there.</div><div><br></div><div>I followed the setup instructions from an ovirt contributed documentation that I can't find now that talked about how to install ovirt with gluster on a 3-node cluster.</div><div><br></div><div>Thank you for your assistance!</div><span class="HOEnZb"><font color="#888888"><div>--Jim</div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 30, 2017 at 1:27 AM, Sahina Bose <span dir="ltr"><<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span>On Thu, Mar 30, 2017 at 1:23 PM, Liron Aravot <span dir="ltr"><<a href="mailto:laravot@redhat.com" target="_blank">laravot@redhat.com</a>></span> wrote:<br><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr">Hi Jim, please see inline<br><div class="gmail_extra"><br><div class="gmail_quote"><span class="m_-3496744416754248898m_3465073948164697150gmail-">On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr">hello:<div><br></div><div>I've been running my ovirt Version 4.0.5.5-1.el7.centos cluster for a while now, and am now revisiting some aspects of it for ensuring that I have good reliability.</div><div><br></div><div>My cluster is a 3 node cluster, with gluster nodes running on each node. After running my cluster a bit, I'm realizing I didn't do a very optimal job of allocating the space on my disk to the different gluster mount points. Fortunately, they were created with LVM, so I'm hoping that I can resize them without much trouble.</div><div><br></div><div>I have a domain for iso, domain for export, and domain for storage, all thin provisioned; then a domain for the engine, not thin provisioned. I'd like to expand the storage domain, and possibly shrink the engine domain and make that space also available to the main storage domain. Is it as simple as expanding the LVM partition, or are there more steps involved? Do I need to take the node offline?</div></div></blockquote><div><br></div></span><div>I didn't understand completely that part - what is the difference between the domain for storage and the domain for engine you mentioned? </div></div></div></div></blockquote><div><br></div></span><div>I think the domain for engine is the one storing Hosted Engine data.<br></div><div>You should be able to expand your underlying LVM partition without having to take the node offline<br> <br></div><span><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="m_-3496744416754248898m_3465073948164697150gmail-"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div><br></div><div>second, I've noticed that the first two nodes seem to have a full copy of the data (the disks are in use), but the 3rd node appears to not be using any of its storage space...It is participating in the gluster cluster, though.</div></div></blockquote></span></div></div></div></blockquote><div><br></div></span><div>Is the volume created as replica 3? If so, fully copy of the data should be present on all 3 nodes. Please provide the output of "gluster volume info"<br><br></div><span><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="m_-3496744416754248898m_3465073948164697150gmail-"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div><br></div><div>Third, currently gluster shares the same network as the VM networks. I'd like to put it on its own network. I'm not sure how to do this, as when I tried to do it at install time, I never got the cluster to come online; I had to make them share the same network to make that work.</div></div></blockquote></span></div></div></div></blockquote><div><br></div></span><div>While creating the bricks the network intended for gluster should have been used to identify the brick in hostname:brick-directory. Changing this at a later point is a bit more involved. Please check online or on gluster-users on changing IP address associated with brick.<br> <br></div><span><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="m_-3496744416754248898m_3465073948164697150gmail-"><div><br></div></span><div>I'm adding Sahina who may shed some light on the gluster question, I'd try on the gluster mailing list as well. </div><span class="m_-3496744416754248898m_3465073948164697150gmail-"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div><br></div><div><br></div><div>Ovirt questions:</div><div>I've noticed that recently, I don't appear to be getting software updates anymore. I used to get update available notifications on my nodes every few days; I haven't seen one for a couple weeks now. is something wrong?</div><div><br></div><div>I have a windows 10 x64 VM. I get a warning that my VM type does not match the installed OS. All works fine, but I've quadrouple-checked that it does match. Is this a known bug?</div></div></blockquote><div><br></div></span><div>Arik, any info on that? </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><span class="m_-3496744416754248898m_3465073948164697150gmail-"><div dir="ltr"><div><br></div><div>I have a UPS that all three nodes and the networking are on. It is a USB UPS. How should I best integrate monitoring in? I could put a raspberry pi up and then run NUT or similar on it, but is there a "better" way with oVirt?</div><div><br></div><div>Thanks!</div><span class="m_-3496744416754248898m_3465073948164697150gmail-m_2753504372628888366HOEnZb"><font color="#888888"><div>--Jim</div></font></span></div>
<br></span>______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a rel="noreferrer" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
<br></blockquote></div><br></div></div>
</blockquote></span></div><br></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>