Hi,
On a new host, I am running into exactly the same scenario.
I have a host with an oVirt-managed GlusterFS volume (single brick on
local disk in distribute mode) on an XFS file system.
I think I have found the root cause, but I doubt I can fix it.
Around the time the VMs went into the paused state, there appears to have
been a restart of the glusterfs FUSE client process:
[2014-12-18 01:43:27.272235] W [glusterfsd.c:1194:cleanup_and_exit]
(--> 0-: received signum (15), shutting down
[2014-12-18 01:43:27.272279] I [fuse-bridge.c:5599:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/onode3.isaac.local:data02'.
[2014-12-18 01:49:36.854339] I [MSGID: 100030] [glusterfsd.c:2018:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.1 (args:
/usr/sbin/glusterfs --volfile-server=onode3.isaac.local --volfile-id=data02
/rhev/data-center/mnt/glusterSD/onode3.isaac.local:data02)
[2014-12-18 01:49:36.862887] I [dht-shared.c:337:dht_init_regex] 0-data02-dht: using
regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2014-12-18 01:49:36.863749] I [client.c:2280:notify] 0-data02-client-0: parent
translators are ready, attempting connect on transport
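The shutdown/startup pair stands out via the 'received signum' and 'Started
running' messages. As a minimal sketch, assuming gluster's usual convention
of deriving client log names from the mount point (the exact path is an
assumption; adjust for your setup), something like this lists the restarts
in one go:

# Sketch: list shutdown/startup events in a gluster FUSE client log.
# The log path is an assumption (gluster derives client log names from
# the mount point); adjust for your setup.
LOG = ('/var/log/glusterfs/rhev-data-center-mnt-glusterSD-'
       'onode3.isaac.local:data02.log')

with open(LOG) as f:
    for line in f:
        if 'received signum' in line or 'Started running' in line:
            print(line.rstrip())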
So I thought I'd check /var/log/messages for potential sources of the
SIGTERM (note that gluster logs timestamps in UTC, so 01:43 there matches
02:43 local time below), and I found this:
Dec 18 02:43:26 onode3 kernel: supervdsmServer[1960]: segfault at 18
ip 00007faa89951bca sp 00007fa355b80f40 error 4 in libgfapi.so.0.0.0[7faa8994c000+18000]
Dec 18 02:43:27 onode3 systemd: supervdsmd.service: main process exited, code=killed,
status=11/SEGV
Dec 18 02:43:27 onode3 systemd: Unit supervdsmd.service entered failed state.
Dec 18 02:43:27 onode3 journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 486, in
_serveRequest
res = method(**params)
File "/usr/share/vdsm/rpc/Bridge.py", line 266, in _dynamicMethod
result = fn(*methodArgs)
File "/usr/share/vdsm/gluster/apiwrapper.py", line 106, in status
return self._gluster.volumeStatus(volumeName, brick, statusOption)
File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 221, in volumeStatus
data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeStatvfs
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in
_callmethod
kind, result = conn.recv()
EOFError
Dec 18 02:43:27 onode3 systemd: supervdsmd.service holdoff time over, scheduling
restart.
Dec 18 02:43:27 onode3 systemd: Stopping Virtual Desktop Server Manager...
Dec 18 02:43:27 onode3 systemd: Stopping "Auxiliary vdsm service for running helper
functions as root"...
Dec 18 02:43:27 onode3 systemd: Starting "Auxiliary vdsm service for running helper
functions as root"...
Dec 18 02:43:27 onode3 systemd: Started "Auxiliary vdsm service for running helper
functions as root".
Dec 18 02:43:27 onode3 journal: vdsm IOProcessClient ERROR IOProcess failure
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 107, in
_communicate
raise Exception("FD closed")
Exception: FD closed
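The chain of events seems clear enough: vdsm forwarded glusterVolumeStatvfs
to supervdsmd over a multiprocessing manager proxy, supervdsm crashed in
libgfapi before it could reply, and the proxy's conn.recv() hit a closed
connection, hence the bare EOFError. A minimal sketch (not vdsm code) that
reproduces this failure mode, with the segfault stood in for by os._exit():

# Sketch, not vdsm code: a multiprocessing manager call raises a bare
# EOFError when the server process dies before replying, which is the
# same failure mode as the supervdsmServer segfault above.
import os
import time
import multiprocessing
from multiprocessing.managers import BaseManager

ADDRESS, AUTHKEY = ('127.0.0.1', 50007), b'secret'

class Manager(BaseManager):
    pass

def serve():
    # 'crash' stands in for the call into libgfapi: the server
    # process dies before it can send a reply.
    Manager.register('crash', callable=lambda: os._exit(1))
    manager = Manager(address=ADDRESS, authkey=AUTHKEY)
    manager.get_server().serve_forever()

if __name__ == '__main__':
    server = multiprocessing.Process(target=serve)
    server.start()
    time.sleep(0.5)                 # crude wait for the server to bind

    Manager.register('crash')       # client side: no callable, proxy only
    client = Manager(address=ADDRESS, authkey=AUTHKEY)
    client.connect()
    try:
        client.crash()              # server exits before replying...
    except EOFError:                # ...so conn.recv() raises EOFError
        print('got EOFError, just like the vdsm traceback')
    server.join()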
I guess I'll file a bug report.
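For that report it may help to pin down where in libgfapi the crash
happened. The kernel line above gives both the faulting instruction pointer
and the library's mapping base, so the offset inside libgfapi.so is a
simple subtraction:

# Sketch: compute the faulting offset inside libgfapi.so.0.0.0 from the
# kernel segfault line ("... ip 00007faa89951bca ... in
# libgfapi.so.0.0.0[7faa8994c000+18000]").
ip = 0x7faa89951bca        # faulting instruction pointer
base = 0x7faa8994c000      # mapping base of libgfapi.so.0.0.0
print(hex(ip - base))      # -> 0x5bca, well within the 0x18000 mapping

With the matching debuginfo installed, addr2line should map that offset to
a source line, e.g. addr2line -e /usr/lib64/libgfapi.so.0.0.0 0x5bca (the
library path is an assumption; adjust for your install).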
Best regards,
Martijn Grendelman
On 12-12-2014 3:44, Punit Dambiwal wrote:
Hi Dan,
Yes, it's glusterfs.
glusterfs logs :-
http://ur1.ca/j3b5f
OS Version: RHEL - 7 - 0.1406.el7.centos.2.3
Kernel Version: 3.10.0 - 123.el7.x86_64
KVM Version: 1.5.3 - 60.el7_0.2
LIBVIRT Version: libvirt-1.1.1-29.el7_0.3
VDSM Version: vdsm-4.16.7-1.gitdb83943.el7
GlusterFS Version: glusterfs-3.6.1-1.el7
Qemu Version: QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-60.el7_0.2)
Thanks,
punit
On Thu, Dec 11, 2014 at 5:47 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Thu, Dec 11, 2014 at 03:41:01PM +0800, Punit Dambiwal wrote:
> Hi,
>
> Suddenly all of my VM on one host paused with the following error :-
>
> vm has paused due to unknown storage error
>
> I am using glusterfs storage with distributed replicate (replica=2); my
> storage and compute are both running on the same node...
>
> engine logs :- http://ur1.ca/j31iu
> Host logs :- http://ur1.ca/j31kk (I grepped it for one failed VM)
libvirtEventLoop::INFO::2014-12-11 15:00:48,627::vm::4780::vm.Vm::(_onIOError)
vmId=`e84bb987-a817-436a-9417-8eab9148e57e`::abnormal vm stop device
virtio-disk0 error eother
Which type of storage is it? gluster? Do you have anything in particular
in the glusterfs logs?
Which glusterfs/qemu/libvirt/vdsm versions do you have installed?