<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<br>
<div class="moz-cite-prefix">On 04/03/2015 10:04 PM, Alastair Neil
wrote:<br>
</div>
<blockquote
cite="mid:CA+SarwqNuvVGUDDjhDRbNii-foMGAyaVibxyMGM5AEPzRkDu+w@mail.gmail.com"
type="cite">
<div dir="ltr">Any follow-up on this?
<div><br>
</div>
<div> Are there known issues using a replica 3 gluster datastore with LVM thin-provisioned bricks?</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 20 March 2015 at 15:22, Alastair
Neil <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:ajneil.tech@gmail.com" target="_blank">ajneil.tech@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div>CentOS 6.6</div>
<span class="">
<div> </div>
<blockquote class="gmail_quote" style="font-size:13px;margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">vdsm-4.16.10-8.gitc937927.el6<br>
glusterfs-3.6.2-1.el6<br>
2.6.32-504.8.1.el6.x86_64</blockquote>
<div><br>
</div>
</span>
<div>I moved to 3.6 specifically to get the snapshotting feature, hence my desire to migrate to thinly provisioned LVM bricks.</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
<br>
<br>
Well, on the glusterfs mailing list there have been discussions:<br>
<br>
<br>
<blockquote type="cite">3.6.2 is a major release and introduces some new cluster-wide features. Additionally, it is not yet stable.</blockquote>
<br>
<br>
<br>
<br>
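As an aside, a thinly provisioned LVM brick of the kind discussed in this thread is usually set up along the following lines. This is only a sketch: the volume group, pool, LV, and mount point names are hypothetical, and the sizes depend entirely on the deployment.

```shell
# Create a thin pool inside an existing volume group (names assumed).
lvcreate --thin --size 500G vg_bricks/brickpool

# Carve a thinly provisioned LV out of the pool for the brick.
lvcreate --thin --virtualsize 1T --name brick1 vg_bricks/brickpool

# XFS is the usual brick filesystem; inode size 512 leaves room
# for gluster's extended attributes.
mkfs.xfs -i size=512 /dev/vg_bricks/brick1
mkdir -p /bricks/brick1
mount /dev/vg_bricks/brick1 /bricks/brick1
```

These commands require root and a real volume group, so treat them as a reference shape rather than something to paste in verbatim.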
<blockquote
cite="mid:CA+SarwqNuvVGUDDjhDRbNii-foMGAyaVibxyMGM5AEPzRkDu+w@mail.gmail.com"
type="cite">
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div><br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 20 March 2015 at 14:57,
Darrell Budic <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:budic@onholyground.com"
target="_blank">budic@onholyground.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word">What version of
gluster are you running on these?
<div><br>
</div>
<div>I’ve seen high load during heals bounce my hosted engine around due to overall system load, but never pause anything else. CentOS 7 combo storage/host systems, gluster 3.5.2.</div>
<div>
<div>
<div><br>
</div>
<div><br>
<div>
<blockquote type="cite">
<div>On Mar 20, 2015, at 9:57 AM,
Alastair Neil <<a
moz-do-not-send="true"
href="mailto:ajneil.tech@gmail.com"
target="_blank">ajneil.tech@gmail.com</a>>
wrote:</div>
<br>
<div>
<div dir="ltr">Pranith
<div><br>
</div>
<div>I have run a pretty straightforward test. I created a two-brick 50 GB replica volume with normal LVM bricks, and installed two servers, one CentOS 6.6 and one CentOS 7.0. I kicked off bonnie++ on both to generate some file system activity and then made the volume replica 3. I saw no issues on the servers.</div>
<div><br>
</div>
<div>It is not clear whether this is a sufficiently rigorous test; the volume I have had issues on is a 3 TB volume with about 2 TB used.</div>
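For what it's worth, the test described above amounts to roughly the following commands. Host names, brick paths, the volume name, and the bonnie++ options are all hypothetical here:

```shell
# Two-brick replica 2 volume on normal (thick) LVM bricks.
gluster volume create testvol replica 2 \
    gluster1:/bricks/test/brick gluster2:/bricks/test/brick
gluster volume start testvol

# Generate filesystem load from the clients (run on each test VM).
bonnie++ -d /mnt/testvol -u nobody

# While the load runs, grow the volume to replica 3 with a third brick,
# which triggers a full self-heal onto the new brick.
gluster volume add-brick testvol replica 3 gluster3:/bricks/test/brick
```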
<div><br>
</div>
<div>-Alastair</div>
<div><br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 19
March 2015 at 12:30, Alastair
Neil <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:ajneil.tech@gmail.com"
target="_blank">ajneil.tech@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div dir="ltr">I don't think I have the resources to test it meaningfully. I have about 50 VMs on my primary storage domain. I might be able to set up a small 50 GB volume and provision 2 or 3 VMs running test loads, but I'm not sure it would be comparable. I'll give it a try and let you know if I see similar behaviour.</div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On
19 March 2015 at
11:34, Pranith Kumar
Karampuri <span
dir="ltr"><<a
moz-do-not-send="true"
href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px
#ccc
solid;padding-left:1ex">
<div text="#000000"
bgcolor="#FFFFFF">
Without thinly provisioned LVM.<span><font color="#888888"><br>
<br>
Pranith</font></span>
<div>
<div><br>
<div>On 03/19/2015 08:01 PM, Alastair Neil wrote:<br>
</div>
<blockquote
type="cite">
<div dir="ltr">Do you mean raw partitions as bricks, or simply without thin-provisioned LVM?
<div><br>
</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 19 March 2015 at 00:32, Pranith Kumar Karampuri <span dir="ltr">&lt;<a moz-do-not-send="true" href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Could you let me know if you see this problem without LVM as well?<span><font color="#888888"><br>
<br>
Pranith</font></span>
<div>
<div><br>
<div>On 03/18/2015 08:25 PM, Alastair Neil wrote:<br>
</div>
<blockquote
type="cite">
<div dir="ltr">I am in the process of replacing the bricks with thinly provisioned LVs, yes.
<div><br>
</div>
<div><br>
</div>
</div>
<div
class="gmail_extra"><br>
<div
class="gmail_quote">On
18 March 2015
at 09:35,
Pranith Kumar
Karampuri <span
dir="ltr"><<a
moz-do-not-send="true" href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0
0 0
.8ex;border-left:1px
#ccc
solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Hi,<br>
Are you using a thin-LVM-based backend on which the bricks are created?<br>
<br>
Pranith
<div>
<div><br>
<div>On 03/18/2015 02:05 AM, Alastair Neil wrote:<br>
</div>
</div>
</div>
<blockquote
type="cite">
<div>
<div>
<div dir="ltr">I have an oVirt cluster with 6 VM hosts and 4 gluster nodes. There are two virtualisation clusters, one with two Nehalem nodes and one with four Sandy Bridge nodes. My master storage domain is a GlusterFS domain backed by a replica 3 gluster volume from 3 of the gluster nodes. The engine is a hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with storage provided by NFS from a different gluster volume. All the hosts are CentOS 6.6.
<div><br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">vdsm-4.16.10-8.gitc937927.el6<br>
glusterfs-3.6.2-1.el6<br>
2.6.32-504.8.1.el6.x86_64</blockquote>
<div><br>
</div>
<div>Problems happen when I try to add a new brick or replace a brick: eventually the self-heal will kill the VMs. In the VMs' logs I see kernel hung-task messages.</div>
<div><br>
</div>
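The brick operations described above map onto commands of roughly the following shape (the volume name and brick paths are hypothetical); the resulting self-heal can be monitored explicitly rather than discovered through stalled VMs:

```shell
# Swap a brick out; on gluster 3.6 replace-brick is only supported
# with "commit force", with the data then repopulated by self-heal.
gluster volume replace-brick vmstore \
    gluster1:/bricks/old/brick gluster4:/bricks/new/brick commit force

# Watch the heal progress on the affected volume.
gluster volume heal vmstore info
gluster volume heal vmstore statistics heal-count
```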
<div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font face="monospace, monospace">Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for more than 120 seconds.<br>
Mar 12 23:05:16 static1 kernel: Not tainted 2.6.32-504.3.3.el6.x86_64 #1<br>
Mar 12 23:05:16 static1 kernel: "echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.<br>
Mar 12 23:05:16 static1 kernel: nginx D 0000000000000001 0 1736 1735 0x00000080<br>
Mar 12 23:05:16 static1 kernel: ffff8800778b17a8 0000000000000082 0000000000000000 00000000000126c0<br>
Mar 12 23:05:16 static1 kernel: ffff88007e5c6500 ffff880037170080 0006ce5c85bd9185 ffff88007e5c64d0<br>
Mar 12 23:05:16 static1 kernel: ffff88007a614ae0 00000001722b64ba ffff88007a615098 ffff8800778b1fd8<br>
Mar 12 23:05:16 static1 kernel: Call Trace:<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff8152a885&gt;] schedule_timeout+0x215/0x2e0<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff8152a503&gt;] wait_for_common+0x123/0x180<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff81064b90&gt;] ? default_wake_function+0x0/0x20<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa0210a76&gt;] ? _xfs_buf_read+0x46/0x60 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa02063c7&gt;] ? xfs_trans_read_buf+0x197/0x410 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff8152a61d&gt;] wait_for_completion+0x1d/0x20<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa020ff5b&gt;] xfs_buf_iowait+0x9b/0x100 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa02063c7&gt;] ? xfs_trans_read_buf+0x197/0x410 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa0210a76&gt;] _xfs_buf_read+0x46/0x60 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa0210b3b&gt;] xfs_buf_read+0xab/0x100 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa02063c7&gt;] xfs_trans_read_buf+0x197/0x410 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa01ee6a4&gt;] xfs_imap_to_bp+0x54/0x130 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa01f077b&gt;] xfs_iread+0x7b/0x1b0 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff811ab77e&gt;] ? inode_init_always+0x11e/0x1c0<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa01eb5ee&gt;] xfs_iget+0x27e/0x6e0 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa01eae1d&gt;] ? xfs_iunlock+0x5d/0xd0 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa0209366&gt;] xfs_lookup+0xc6/0x110 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa0216024&gt;] xfs_vn_lookup+0x54/0xa0 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff8119dc65&gt;] do_lookup+0x1a5/0x230<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff8119e8f4&gt;] __link_path_walk+0x7a4/0x1000<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff811738e7&gt;] ? cache_grow+0x217/0x320<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff8119f40a&gt;] path_walk+0x6a/0xe0<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff8119f61b&gt;] filename_lookup+0x6b/0xc0<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff811a0747&gt;] user_path_at+0x57/0xa0<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa0204e74&gt;] ? _xfs_trans_commit+0x214/0x2a0 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffffa01eae3e&gt;] ? xfs_iunlock+0x7e/0xd0 [xfs]<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff81193bc0&gt;] vfs_fstatat+0x50/0xa0<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff811aaf5d&gt;] ? touch_atime+0x14d/0x1a0<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff81193d3b&gt;] vfs_stat+0x1b/0x20<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff81193d64&gt;] sys_newstat+0x24/0x50<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff810e5c87&gt;] ? audit_syscall_entry+0x1d7/0x200<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff810e5a7e&gt;] ? __audit_syscall_exit+0x25e/0x290<br>
Mar 12 23:05:16 static1 kernel: [&lt;ffffffff8100b072&gt;] system_call_fastpath+0x16/0x1b</font></blockquote>
</div>
<div><br>
</div>
<div><br>
</div>
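Incidentally, the 120-second threshold shown in the trace above is tunable from inside the guest. Raising it does not fix the underlying I/O stall, but it distinguishes a slow heal from a genuine hang (a sketch; run in the affected VM):

```shell
# Current hung-task warning threshold (120 s by default on these kernels).
sysctl kernel.hung_task_timeout_secs

# Raise it for the duration of a heal; setting 0 disables the message.
sysctl -w kernel.hung_task_timeout_secs=300
```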
<div>I am wondering if my volume settings are causing this. Can anyone with more knowledge take a look and let me know:</div>
<div><br>
</div>
<div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font face="monospace, monospace">network.remote-dio: on<br>
performance.stat-prefetch: off<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
nfs.export-volumes: on<br>
network.ping-timeout: 20<br>
cluster.self-heal-readdir-size: 64KB<br>
cluster.quorum-type: auto<br>
cluster.data-self-heal-algorithm: diff<br>
cluster.self-heal-window-size: 8<br>
cluster.heal-timeout: 500<br>
cluster.self-heal-daemon: on<br>
cluster.entry-self-heal: on<br>
cluster.data-self-heal: on<br>
cluster.metadata-self-heal: on<br>
cluster.readdir-optimize: on<br>
cluster.background-self-heal-count: 20<br>
cluster.rebalance-stats: on<br>
cluster.min-free-disk: 5%<br>
cluster.eager-lock: enable<br>
storage.owner-uid: 36<br>
storage.owner-gid: 36<br>
auth.allow: *<br>
user.cifs: disable<br>
cluster.server-quorum-ratio: 51%</font></blockquote>
</div>
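Options like the ones listed above are applied per volume with <tt>gluster volume set</tt>; a couple of representative commands, with a hypothetical volume name:

```shell
# Each option is applied individually with volume set.
gluster volume set vmstore cluster.data-self-heal-algorithm diff
gluster volume set vmstore network.ping-timeout 20

# Cluster-wide options use the special volume name "all".
gluster volume set all cluster.server-quorum-ratio 51%

# Review the effective settings for the volume.
gluster volume info vmstore
```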
<div><br>
</div>
<div>Many
Thanks,
Alastair</div>
<div><br>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</div>
</div>
<pre>_______________________________________________
Users mailing list
<a moz-do-not-send="true" href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a moz-do-not-send="true" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
</blockquote>
<br>
</div>
<br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
<div class="HOEnZb">
<div class="h5"><br>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
<br>
<br>
<blockquote
cite="mid:CA+SarwqNuvVGUDDjhDRbNii-foMGAyaVibxyMGM5AEPzRkDu+w@mail.gmail.com"
type="cite">
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb">
<div class="h5">
</div>
</div>
</blockquote>
</div>
<br>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
</blockquote>
<br>
<BR />
<BR />
<b style="color:#604c78"></b><br><span style="color:#604c78;"><font color="000000"><span style="mso-fareast-language:en-gb;" lang="NL">Met vriendelijke groet, With kind regards,<br><br></span>Jorick Astrego</font></span><b style="color:#604c78"><br><br>Netbulae Virtualization Experts </b><br><hr style="border:none;border-top:1px solid #ccc;"><table style="width: 522px"><tbody><tr><td style="width: 130px;font-size: 10px">Tel: 053 20 30 270</td> <td style="width: 130px;font-size: 10px">info@netbulae.eu</td> <td style="width: 130px;font-size: 10px">Staalsteden 4-3A</td> <td style="width: 130px;font-size: 10px">KvK 08198180</td></tr><tr> <td style="width: 130px;font-size: 10px">Fax: 053 20 30 271</td> <td style="width: 130px;font-size: 10px">www.netbulae.eu</td> <td style="width: 130px;font-size: 10px">7547 TA Enschede</td> <td style="width: 130px;font-size: 10px">BTW NL821234584B01</td></tr></tbody></table><br><hr style="border:none;border-top:1px solid #ccc;"><BR />
</body>
</html>