<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, May 8, 2017 at 10:46 PM, Ryan Housand <span dir="ltr"><<a href="mailto:rhousand@empoweredbenefits.com" target="_blank">rhousand@empoweredbenefits.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">We have three gluster shares (_data, _engine, _export) created by a brick located on three of our VM hosts. See output from "gluster volume info" below:<br>
<br>
Volume Name: data<br>
Type: Replicate<br>
Volume ID: c07fdf43-b838-4e4b-bb26-61dbf406cb57<br>
Status: Started<br>
Number of Bricks: 1 x (2 + 1) = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick2/data<br>
Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick2/data<br>
Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick2/data (arbiter)<br>
Options Reconfigured:<br>
performance.readdir-ahead: on<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
performance.io-cache: off<br>
performance.stat-prefetch: off<br>
cluster.eager-lock: enable<br>
network.remote-dio: off<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
features.shard: on<br>
features.shard-block-size: 512MB<br>
performance.low-prio-threads: 32<br>
cluster.data-self-heal-algorithm: full<br>
cluster.locking-scheme: granular<br>
cluster.shd-wait-qlength: 10000<br>
cluster.shd-max-threads: 6<br>
network.ping-timeout: 30<br>
user.cifs: off<br>
nfs.disable: on<br>
performance.strict-o-direct: on<br>
<br>
Volume Name: engine<br>
Type: Distributed-Replicate<br>
Volume ID: 25455f13-75ba-4bc6-926a-d06ee7c5859a<br>
Status: Started<br>
Number of Bricks: 2 x (2 + 1) = 6<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick1/engine<br>
Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick1/engine<br>
Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick1/engine (arbiter)<br>
Brick4: vmhost04-chi:/mnt/engine<br>
Brick5: vmhost05-chi:/mnt/engine<br>
Brick6: vmhost06-chi:/mnt/engine (arbiter)<br>
Options Reconfigured:<br>
performance.readdir-ahead: on<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
performance.io-cache: off<br>
performance.stat-prefetch: off<br>
cluster.eager-lock: enable<br>
network.remote-dio: off<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
features.shard: on<br>
features.shard-block-size: 512MB<br>
performance.low-prio-threads: 32<br>
cluster.data-self-heal-algorithm: full<br>
cluster.locking-scheme: granular<br>
cluster.shd-wait-qlength: 10000<br>
cluster.shd-max-threads: 6<br>
network.ping-timeout: 30<br>
user.cifs: off<br>
nfs.disable: on<br>
performance.strict-o-direct: on<br>
<br>
Volume Name: export<br>
Type: Replicate<br>
Volume ID: a4c3a49a-fa83-4a62-9523-989c8e016c35<br>
Status: Started<br>
Number of Bricks: 1 x (2 + 1) = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick3/export<br>
Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick3/export<br>
Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick3/export (arbiter)<br>
Options Reconfigured:<br>
performance.readdir-ahead: on<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
performance.io-cache: off<br>
performance.stat-prefetch: off<br>
cluster.eager-lock: enable<br>
network.remote-dio: off<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
features.shard: on<br>
features.shard-block-size: 512MB<br>
performance.low-prio-threads: 32<br>
cluster.data-self-heal-algorithm: full<br>
cluster.locking-scheme: granular<br>
cluster.shd-wait-qlength: 10000<br>
cluster.shd-max-threads: 6<br>
network.ping-timeout: 30<br>
user.cifs: off<br>
nfs.disable: on<br>
performance.strict-o-direct: on<br>
<br>
Our issue is that we ran out of space on our gluster engine bricks, which caused our Hosted Engine VM to crash. We added additional bricks from new VM hosts (see vmhost04 to vmhost06 above), but we are still unable to restart our Hosted Engine because the first three bricks are out of space. My understanding is that I need to extend the bricks that are 100% full on our engine partition. Is it best practice to stop the glusterd service, or can I use "gluster volume stop engine" to stop only the volume I need to extend? Also, if I need to stop glusterd, will the VMs hosted on my oVirt cluster be affected by the export and data mount points being offline?<br></blockquote><div><br></div><div>Adding the 3 new bricks to engine does not redistribute the existing data; you need to run a rebalance on the engine gluster volume for that. There is currently a bug where rebalance can cause corruption when performed with ongoing I/O on the volume.<br></div><div>I think the best way for you to do this is to put hosted-engine into global maintenance, stop the hosted-engine VM, and then rebalance the engine gluster volume (see the sketch below).<br><br></div><div>What was the original size of the engine gluster volume? (Curious to understand why you ran out of space.)<br></div><div><br></div><div>The VMs running on the data gluster volume should not be affected by this.<br><br></div><div> <br></div>
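<div>To make this concrete, a rough sketch of the command sequence (assuming a standard hosted-engine setup; adjust to your environment, and wait for the rebalance to complete before bringing the engine back up):<br>
<br>
# On one of the hosted-engine hosts: enable global maintenance and stop the engine VM<br>
hosted-engine --set-maintenance --mode=global<br>
hosted-engine --vm-shutdown<br>
<br>
# Start the rebalance on the engine volume and watch its progress<br>
gluster volume rebalance engine start<br>
gluster volume rebalance engine status<br>
<br>
# Once the rebalance has completed, bring the engine back up<br>
hosted-engine --vm-start<br>
hosted-engine --set-maintenance --mode=none<br>
</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">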
<br>
Thanks,<br>
<br>
Ryan<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
</blockquote></div><br></div></div>