What version of gluster are you running on these?

I've seen high load during heals bounce my hosted engine around due to overall system load, but never pause anything else. CentOS 7 combo storage/host systems, gluster 3.5.2.

On Mar 20, 2015, at 9:57 AM, Alastair Neil <ajneil.tech@gmail.com> wrote:

Pranith
I have run a pretty straightforward test. I created a two-brick 50 GB replica volume with normal LVM bricks, and installed two servers, one CentOS 6.6 and one CentOS 7.0. I kicked off bonnie++ on both to generate some file system activity and then made the volume replica 3. I saw no issues on the servers.
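For reference, the sequence was roughly the following (host names and brick paths here are illustrative, not the exact ones I used):

    gluster volume create testvol replica 2 host1:/bricks/test/b1 host2:/bricks/test/b1
    gluster volume start testvol
    # run bonnie++ inside the two guest servers, then grow the volume:
    gluster volume add-brick testvol replica 3 host3:/bricks/test/b1
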
I am not sure this is a sufficiently rigorous test, and the volume I have had issues on is a 3 TB volume with about 2 TB used.

-Alastair

On 19 March 2015 at 12:30, Alastair Neil <ajneil.tech@gmail.com> wrote:
I don't think I have the resources to test it meaningfully. I have about 50 VMs on my primary storage domain. I might be able to set up a small 50 GB volume and provision 2 or 3 VMs running test loads, but I'm not sure it would be comparable. I'll give it a try and let you know if I see similar behaviour.
On 19 March 2015 at 11:34, Pranith Kumar Karampuri <pkarampu@redhat.com> wrote:
Without thinly provisioned LVM.

Pranith
On 03/19/2015 08:01 PM, Alastair Neil wrote:
> Do you mean raw partitions as bricks, or simply without thin-provisioned LVM?
>
> On 19 March 2015 at 00:32, Pranith Kumar Karampuri <pkarampu@redhat.com> wrote:
> Could you let me know if you see this problem without LVM as well?
>
> Pranith
>
> On 03/18/2015 08:25 PM, Alastair Neil wrote:
>> Yes, I am in the process of replacing the bricks with thinly provisioned LVs.
>>
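>> (By thinly provisioned LVs I mean bricks on LVM thin volumes, created roughly like this; the VG name and sizes are illustrative, not my exact layout:
>>
>>     lvcreate -L 100G -T vg_bricks/thinpool
>>     lvcreate -V 50G -T vg_bricks/thinpool -n brick1
>>     mkfs.xfs -i size=512 /dev/vg_bricks/brick1
>> )
>>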
>> On 18 March 2015 at 09:35, Pranith Kumar Karampuri <pkarampu@redhat.com> wrote:
>> Hi,
>> Are you using a thin-LVM-based backend on which the bricks are created?
>>
>> Pranith
>>
>> On 03/18/2015 02:05 AM, Alastair Neil wrote:
>>> I have an oVirt cluster with 6 VM hosts and 4 gluster nodes. There are two virtualisation clusters, one with two Nehalem nodes and one with four Sandy Bridge nodes. My master storage domain is a GlusterFS domain backed by a replica 3 gluster volume from 3 of the gluster nodes. The engine is a hosted engine 3.5.1 on 3 of the Sandy Bridge nodes, with storage provided by NFS from a different gluster volume. All the hosts are CentOS 6.6.
>>>
>>> vdsm-4.16.10-8.gitc937927.el6
>>> glusterfs-3.6.2-1.el6
>>> kernel 2.6.32-504.8.1.el6.x86_64
>>>
>>> Problems happen when I try to add a new brick or replace a brick: eventually the self-heal will kill the VMs. In the VMs' logs I see kernel hung task messages.
>>>
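>>> The operations that trigger it are along these lines (volume and brick names are placeholders, not my real layout):
>>>
>>>     gluster volume add-brick vol0 replica 3 newhost:/bricks/vol0/b1
>>>     gluster volume replace-brick vol0 oldhost:/bricks/vol0/b1 newhost:/bricks/vol0/b1 commit force
>>>     gluster volume heal vol0 info    # the resulting heal is when the VMs stall
>>>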
>>> Mar 12 23:05:16 static1 kernel: INFO: task nginx:1736 blocked for more than 120 seconds.
>>> Mar 12 23:05:16 static1 kernel: Not tainted 2.6.32-504.3.3.el6.x86_64 #1
>>> Mar 12 23:05:16 static1 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> Mar 12 23:05:16 static1 kernel: nginx D 0000000000000001 0 1736 1735 0x00000080
>>> Mar 12 23:05:16 static1 kernel: ffff8800778b17a8 0000000000000082 0000000000000000 00000000000126c0
>>> Mar 12 23:05:16 static1 kernel: ffff88007e5c6500 ffff880037170080 0006ce5c85bd9185 ffff88007e5c64d0
>>> Mar 12 23:05:16 static1 kernel: ffff88007a614ae0 00000001722b64ba ffff88007a615098 ffff8800778b1fd8
>>> Mar 12 23:05:16 static1 kernel: Call Trace:
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff8152a885>] schedule_timeout+0x215/0x2e0
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff8152a503>] wait_for_common+0x123/0x180
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff81064b90>] ? default_wake_function+0x0/0x20
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa0210a76>] ? _xfs_buf_read+0x46/0x60 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa02063c7>] ? xfs_trans_read_buf+0x197/0x410 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff8152a61d>] wait_for_completion+0x1d/0x20
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa020ff5b>] xfs_buf_iowait+0x9b/0x100 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa02063c7>] ? xfs_trans_read_buf+0x197/0x410 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa0210a76>] _xfs_buf_read+0x46/0x60 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa0210b3b>] xfs_buf_read+0xab/0x100 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa02063c7>] xfs_trans_read_buf+0x197/0x410 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa01ee6a4>] xfs_imap_to_bp+0x54/0x130 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa01f077b>] xfs_iread+0x7b/0x1b0 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff811ab77e>] ? inode_init_always+0x11e/0x1c0
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa01eb5ee>] xfs_iget+0x27e/0x6e0 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa01eae1d>] ? xfs_iunlock+0x5d/0xd0 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa0209366>] xfs_lookup+0xc6/0x110 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa0216024>] xfs_vn_lookup+0x54/0xa0 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff8119dc65>] do_lookup+0x1a5/0x230
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff8119e8f4>] __link_path_walk+0x7a4/0x1000
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff811738e7>] ? cache_grow+0x217/0x320
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff8119f40a>] path_walk+0x6a/0xe0
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff8119f61b>] filename_lookup+0x6b/0xc0
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff811a0747>] user_path_at+0x57/0xa0
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa0204e74>] ? _xfs_trans_commit+0x214/0x2a0 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffffa01eae3e>] ? xfs_iunlock+0x7e/0xd0 [xfs]
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff81193bc0>] vfs_fstatat+0x50/0xa0
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff811aaf5d>] ? touch_atime+0x14d/0x1a0
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff81193d3b>] vfs_stat+0x1b/0x20
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff81193d64>] sys_newstat+0x24/0x50
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff810e5c87>] ? audit_syscall_entry+0x1d7/0x200
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff810e5a7e>] ? __audit_syscall_exit+0x25e/0x290
>>> Mar 12 23:05:16 static1 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
>>>
>>> I am wondering if my volume settings are causing this. Can anyone with more knowledge take a look and let me know:
>>>
>>> network.remote-dio: on
>>> performance.stat-prefetch: off
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> nfs.export-volumes: on
>>> network.ping-timeout: 20
>>> cluster.self-heal-readdir-size: 64KB
>>> cluster.quorum-type: auto
>>> cluster.data-self-heal-algorithm: diff
>>> cluster.self-heal-window-size: 8
>>> cluster.heal-timeout: 500
>>> cluster.self-heal-daemon: on
>>> cluster.entry-self-heal: on
>>> cluster.data-self-heal: on
>>> cluster.metadata-self-heal: on
>>> cluster.readdir-optimize: on
>>> cluster.background-self-heal-count: 20
>>> cluster.rebalance-stats: on
>>> cluster.min-free-disk: 5%
>>> cluster.eager-lock: enable
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> auth.allow:*
>>> user.cifs: disable
>>> cluster.server-quorum-ratio: 51%
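>>>
>>> (Any of these can be changed with "gluster volume set", e.g.
>>>
>>>     gluster volume set vol0 cluster.self-heal-window-size 8
>>>
>>> where vol0 stands in for the real volume name.)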
>>>
>>> Many Thanks, Alastair
>>>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users