Looking at the logs:
In the vdsm log:
2019-02-26 17:07:09,440+0400 ERROR (check/loop) [storage.Monitor] Error checking path /rhev/data-center/mnt/glusterSD/ovirtnode1.miac:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/metadata (monitor:498)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 496, in _pathChecked
    delay = result.delay()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/check.py", line 391, in delay
    raise exception.MiscFileReadException(self.path, self.rc, self.err)
MiscFileReadException: Internal file read failure: (u'/rhev/data-center/mnt/glusterSD/ovirtnode1.miac:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/metadata', 1, 'Read timeout')
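
For context on the 'Read timeout': vdsm's storage monitor probes the domain by reading the first 4 KiB of the metadata file with O_DIRECT, and flags the domain when that read fails or takes too long. A minimal sketch to rerun the probe by hand, assuming vdsm's usual dd-based check and a 10-second timeout (the path is taken from the log above; the exact dd flags and timeout are assumptions about the defaults):

import subprocess
import time

# vdsm runs an equivalent of:
#   dd if=<metadata> of=/dev/null bs=4096 count=1 iflag=direct
# and raises MiscFileReadException when it fails or times out.
PATH = ("/rhev/data-center/mnt/glusterSD/ovirtnode1.miac:_vmstore/"
        "01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/metadata")

start = time.time()
rc = subprocess.call(["dd", "if=" + PATH, "of=/dev/null",
                      "bs=4096", "count=1", "iflag=direct"])
elapsed = time.time() - start
print("dd rc=%d, took %.3fs" % (rc, elapsed))
if rc != 0 or elapsed > 10:  # 10s timeout is an assumed default
    print("this is the condition the monitor reports as a check failure")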
Does this match the below error (in UTC) from the gluster mount log?
[2019-02-28 07:01:23.040258] W [socket.c:600:__socket_rwv] 0-vmstore-client-1: readv on 172.16.100.5:49155 failed (No data available)
[2019-02-28 07:01:23.040287] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 0-vmstore-client-1: disconnected from vmstore-client-1. Client process will keep trying to connect to glusterd until brick's port is available
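
One thing to note before matching the two logs: vdsm logs local time with an explicit +0400 offset, while the gluster mount log is in UTC, so the timestamps only line up after conversion. A minimal sketch of the check (timestamps copied from the entries above):

from datetime import datetime, timedelta

# vdsm logs local time (+0400); the gluster mount log uses UTC.
vdsm_local = datetime.strptime("2019-02-26 17:07:09", "%Y-%m-%d %H:%M:%S")
vdsm_utc = vdsm_local - timedelta(hours=4)
print(vdsm_utc)  # 2019-02-26 13:07:09, vs. 2019-02-28 07:01:23 in gluster

Going by that, the gluster disconnect quoted above is from a different day than the vdsm read timeout, so the gluster entries around 2019-02-26 13:07 UTC may be the more relevant ones to pull.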
Also, what does the below error translate to in English? I think the error has to do with storage being unavailable at the time. Was there a network connectivity issue during that period?
2019-02-26 17:08:20,271+0400 WARN (qgapoller/3) [virt.periodic.VmDispatcher] could not run <function <lambda> at 0x7ff7341cd668> on ['de76aa6c-a211-41de-8d85-7d2821c3980d', '7a3af2e7-8296-4fe0-ac55-c52a4b1de93f'] (periodic:323)
2019-02-26 17:08:20,823+0400 ERROR (vm/d546add1) [virt.vm] (vmId='d546add1-126a-4490-bc83-469bab659854') The vm start process failed (vm:948)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 877, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2898, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Не удалось установить блокировку: На устройстве не осталось свободного места
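
To answer the translation question above: the libvirt message reads in English as "Failed to acquire lock: No space left on device". The first half is libvirt failing to acquire the sanlock lease taken when the VM starts; the second half is the localized strerror() text for ENOSPC, which can be confirmed quickly:

import errno
import os

# "На устройстве не осталось свободного места" is the Russian strerror
# for ENOSPC (28); on an English-locale system strerror returns:
print(errno.ENOSPC)               # 28
print(os.strerror(errno.ENOSPC))  # No space left on device

Despite the wording, this error is often reported on setups where the storage is not actually full, so the sanlock.log from the same window would help narrow down whether it is a real out-of-space condition or a lease-acquisition failure.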
On Fri, Mar 1, 2019 at 1:08 PM Mike Lykov <combr(a)ya.ru> wrote:
>
> On 01.03.2019 9:51, Sahina Bose wrote:
> > Any errors in vdsm.log or gluster mount log for this volume?
> >
>
> I cannot find any.
> Here are the full logs from one node for that period:
>
>
> https://yadi.sk/d/BzLBb8VGNEwidw
> file name ovirtnode1-logs-260219.tar.gz
>
> gluster and vdsm logs for all volumes
>
> sanlock client status now (can it contain any useful info for the "cannot set lock" error?):
> node without any VMs
>
> [root@ovirtnode5 ~]# sanlock client status
> daemon 165297fa-c9e7-47ec-8949-80f39f52304c.ovirtnode5
> p -1 helper
> p -1 listener
> p -1 status
> s 01f6fd06-9ad1-4957-bcda-df24dc4cc4f5:2:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/ids:0
> s 64f18bf1-4eb6-4b3e-a216-9681091a3bc7:2:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_data/64f18bf1-4eb6-4b3e-a216-9681091a3bc7/dom_md/ids:0
> s hosted-engine:2:/var/run/vdsm/storage/0571ac7b-a28e-4e20-9cd8-4803e40ec602/1c7d4c4d-4ae4-4743-a61c-1437459dcc14/699eec1d-c713-4e66-8587-27792d9a2b32:0
> s 0571ac7b-a28e-4e20-9cd8-4803e40ec602:2:/rhev/data-center/mnt/glusterSD/ovirtstor1.miac\:_engine/0571ac7b-a28e-4e20-9cd8-4803e40ec602/dom_md/ids:0
>
> node with VMs
>
> [root@ovirtnode1 /]# sanlock client status
> daemon 71784659-0fac-4802-8c0d-0efe3ab977d9.ovirtnode1
> p -1 helper
> p -1 listener
> p 36456 miac_serv2
> p 48024 miac_gitlab_runner
> p 10151
> p 50624 e-l-k.miac
> p 455336 openfire.miac
> p 456445 miac_serv3
> p 458384 debian9_2
> p -1 status
> s 01f6fd06-9ad1-4957-bcda-df24dc4cc4f5:1:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/ids:0
> s 64f18bf1-4eb6-4b3e-a216-9681091a3bc7:1:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_data/64f18bf1-4eb6-4b3e-a216-9681091a3bc7/dom_md/ids:0
> s hosted-engine:1:/var/run/vdsm/storage/0571ac7b-a28e-4e20-9cd8-4803e40ec602/1c7d4c4d-4ae4-4743-a61c-1437459dcc14/699eec1d-c713-4e66-8587-27792d9a2b32:0
> s 0571ac7b-a28e-4e20-9cd8-4803e40ec602:1:/rhev/data-center/mnt/glusterSD/ovirtstor1.miac\:_engine/0571ac7b-a28e-4e20-9cd8-4803e40ec602/dom_md/ids:0
> r 01f6fd06-9ad1-4957-bcda-df24dc4cc4f5:b19996be-1548-41ad-afe3-1726ee38d368:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/xleases:13631488:7 p 458384
> r 01f6fd06-9ad1-4957-bcda-df24dc4cc4f5:4507a184-e158-484e-932a-2f1266b80223:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/xleases:7340032:7 p 456445
> r 01f6fd06-9ad1-4957-bcda-df24dc4cc4f5:d546add1-126a-4490-bc83-469bab659854:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/xleases:19922944:6 p 455336
> r 0571ac7b-a28e-4e20-9cd8-4803e40ec602:SDM:/rhev/data-center/mnt/glusterSD/ovirtstor1.miac\:_engine/0571ac7b-a28e-4e20-9cd8-4803e40ec602/dom_md/leases:1048576:10 p 10151
> r 01f6fd06-9ad1-4957-bcda-df24dc4cc4f5:7a3af2e7-8296-4fe0-ac55-c52a4b1de93f:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/xleases:17825792:5 p 50624
> r 01f6fd06-9ad1-4957-bcda-df24dc4cc4f5:4c2aaf48-a3f1-45a1-9c2b-912763643268:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/xleases:10485760:4 p 48024
> r 01f6fd06-9ad1-4957-bcda-df24dc4cc4f5:6c380073-9650-4832-8416-3001c5a172ab:/rhev/data-center/mnt/glusterSD/ovirtnode1.miac\:_vmstore/01f6fd06-9ad1-4957-bcda-df24dc4cc4f5/dom_md/xleases:6291456:6 p 36456
>
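
To make the status output above easier to read, here is a small sketch that pairs each resource lease (the "r ..." lines, which end in "p <pid>") with the client name from the "p <pid> <name>" lines. It assumes the line format shown above and needs to run as root:

import subprocess

out = subprocess.check_output(["sanlock", "client", "status"]).decode()

names = {}   # pid -> client name, from "p <pid> <name>" lines
leases = []  # (pid, resource string), from "r ... p <pid>" lines

for line in out.splitlines():
    parts = line.split()
    if not parts:
        continue
    if parts[0] == "p" and len(parts) >= 3:
        # daemon client lines; bare "p <pid>" lines carry no name
        names[parts[1]] = parts[2]
    elif parts[0] == "r" and "p" in parts[1:]:
        pid = parts[parts.index("p", 1) + 1]
        leases.append((pid, parts[1]))

for pid, res in leases:
    # resource string is lockspace:resource_uuid:path:offset:lver
    print("%s (pid %s) holds lease %s" % (names.get(pid, "?"), pid,
                                          res.split(":")[1]))

On ovirtnode1's output above this would show, for example, the lease for d546add1-126a-4490-bc83-469bab659854, which matches the vmId in the failed-start traceback earlier, registered at xleases offset 19922944 to pid 455336.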