Thank you very much for your reply.
I checked NTP and realized the service wasn't working properly on two of the three nodes,
but despite that the clocks seemed to show the correct time (checked with date and
hwclock). I switched to chronyd and stopped the ntpd service, and the servers' clocks now
appear to be synchronized.
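For reference, this is roughly how I disabled ntpd and checked the sync on each node
(plain chronyc/systemd tooling, nothing oVirt-specific, so adjust to your setup):

# stop ntpd for good and make sure chronyd is the active time service
systemctl disable --now ntpd
systemctl enable --now chronyd
# list the configured time sources and the current offset/drift
chronyc sources -v
chronyc tracking
# quick overall view: system clock synchronized yes/no, plus the RTC time
timedatectl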
The time in the BIOS is different from the system time. Could this affect the overall
behaviour or performance?
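(My assumption is that the BIOS clock only really matters at boot, before chronyd kicks in,
and that it can simply be rewritten from the now-correct system time with something like
the following; please correct me if that is a bad idea on oVirt nodes:

# copy the current system time into the hardware (BIOS) clock
hwclock --systohc
# confirm both clocks agree afterwards
timedatectl )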
This is what I could gather from the logs:
Node1:
[root@ov-no1 ~]# more /var/log/messages |egrep "(WARN|error)"|more
Aug 27 17:53:08 ov-no1 libvirtd: 2020-08-27 14:53:08.947+0000: 5613: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
Aug 27 17:58:08 ov-no1 libvirtd: 2020-08-27 14:58:08.943+0000: 5613: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
Aug 27 18:03:08 ov-no1 libvirtd: 2020-08-27 15:03:08.937+0000: 5614: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
Aug 27 18:08:08 ov-no1 libvirtd: 2020-08-27 15:08:08.951+0000: 5617: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
Aug 27 18:13:08 ov-no1 libvirtd: 2020-08-27 15:13:08.951+0000: 5616: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
Aug 27 18:18:08 ov-no1 libvirtd: 2020-08-27 15:18:08.942+0000: 5618: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
.
.
[Around that time Node 3, which is the arbiter node, was placed in local maintenance mode.
It was shut down for maintenance, and when we booted it up again everything seemed fine. We
took it out of maintenance mode and, once the healing processes finished, Node 1 became
NonResponsive. Long story short, the VDSM agent sent Node 1 a restart command. Node 1
rebooted, the HostedEngine came up on Node 1, and the rest of the VMs that had been hosted
on Node 1 had to be brought up manually. Since then everything seemed to be working as it
should. The HostedEngine VM shut down for no apparent reason 5 days later, which made us
believe there was no connection between the two incidents.]
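(For completeness: the per-host HA state can be checked from any node with
hosted-engine --vm-status, and the maintenance flag toggled with
hosted-engine --set-maintenance. I'm not certain this is the exact way we cleared it on the
arbiter, since the same can be done from the web UI:

# show engine status, score and maintenance flags for every HA host
hosted-engine --vm-status
# take the local host out of local maintenance
hosted-engine --set-maintenance --mode=none )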
.
.
.
.
.
Sep 1 05:53:30 ov-no1 vdsm[6706]: WARN Worker blocked: <Worker name=jsonrpc/3 running
<Task <JsonRpcTask {'params': {}, 'jsonrpc': '2.0',
'method':
u'GlusterVolume.geoRepSessionList', 'id':
u'e1f4db88-5d36-4ba1-8848-3855c1dbc6ee'} at 0x7f1ed7381650> timeout=60,
duration=60.00 at 0x7f1ed7381dd0> t
ask#=76268 at 0x7f1ebc0797d0>, traceback:#012File:
"/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012
self.__bootstrap_inner()#012Fil
e: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012
self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run
#012 self.__target(*self.__args, **self.__kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in
run#012 ret =
func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012
self._execute_task()#012File: "/usr/lib/p
ython2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012
task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 3
91, in __call__#012 self._callable()#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in
__call__#012 self._handler(sel
f._ctx, self._req)#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in
_serveRequest#012 response = self._handle_request
(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
line 345, in _handle_request#012 res = method(**params)#012File: "/usr
/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012
result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-package
s/vdsm/gluster/apiwrapper.py", line 237, in geoRepSessionList#012
remoteUserName)#012File: "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py",
l
ine 93, in wrapper#012 rv = func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 551, in
volumeGeoRepSessionL
ist#012 remoteUserName,#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in
__call__#012 return callMethod()#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in
<lambda>#012 **kwargs)#012File: "<string>", line 2, in
glusterVolumeGeoRep
Status#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in
_callmethod#012 kind, result = conn.recv()
Sep 1 05:54:30 ov-no1 vdsm[6706]: WARN Worker blocked: <Worker name=jsonrpc/3 running
<Task <JsonRpcTask {'params': {}, 'jsonrpc': '2.0',
'method':
u'GlusterVolume.geoRepSessionList', 'id':
u'e1f4db88-5d36-4ba1-8848-3855c1dbc6ee'} at 0x7f1ed7381650> timeout=60,
duration=120.00 at 0x7f1ed7381dd0>
task#=76268 at 0x7f1ebc0797d0>, traceback:#012File:
"/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012
self.__bootstrap_inner()#012Fi
le: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012
self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in ru
n#012 self.__target(*self.__args, **self.__kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in
run#012 ret
= func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012
self._execute_task()#012File: "/usr/lib/
python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012
task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line
391, in __call__#012 self._callable()#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in
__call__#012 self._handler(se
lf._ctx, self._req)#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in
_serveRequest#012 response = self._handle_reques
t(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
line 345, in _handle_request#012 res = method(**params)#012File: "/us
r/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012
result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packag
es/vdsm/gluster/apiwrapper.py", line 237, in geoRepSessionList#012
remoteUserName)#012File: "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py",
line 93, in wrapper#012 rv = func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 551, in
volumeGeoRepSession
List#012 remoteUserName,#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in
__call__#012 return callMethod()#012File
: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in
<lambda>#012 **kwargs)#012File: "<string>", line 2, in
glusterVolumeGeoRe
pStatus#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759,
in _callmethod#012 kind, result = conn.recv()
[root@ov-no1 ~]# xzcat /var/log/vdsm/vdsm.log.57.xz |egrep "(WARN|error)" |more
2020-09-01 09:40:50,023+0300 INFO (jsonrpc/5) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:40:50,023+0300 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:41:00,061+0300 INFO (jsonrpc/0) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:41:00,061+0300 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:41:10,078+0300 INFO (jsonrpc/4) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:41:10,078+0300 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:41:20,086+0300 INFO (jsonrpc/0) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:41:20,086+0300 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:41:30,107+0300 INFO (jsonrpc/0) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:41:30,107+0300 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:41:38,710+0300 INFO (jsonrpc/4) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:41:38,711+0300 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:41:40,115+0300 INFO (jsonrpc/5) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:41:40,115+0300 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.01 seconds (__init__:312)
2020-09-01 09:41:44,541+0300 WARN (monitor/e2e7282) [storage.PersistentDict] Could not
parse line `
2020-09-01 09:41:44,544+0300 WARN (monitor/0a5e552) [storage.PersistentDict] Could not
parse line `
[root@ov-no1 ~]# xzcat /var/log/vdsm/supervdsm.log.29.xz |egrep "(WARN|error|ERR)" |more
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,182::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,183::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,185::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,304::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,306::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,308::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,440::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,441::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,442::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,442::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,443::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,443::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,444::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,445::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,445::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::ERROR::2020-09-01 09:32:36,358::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-01 09:32:36,361::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-01 09:32:36,363::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-01 09:32:36,365::udev::53::blivet::(get_device) No
device at ''
[root@ov-no1 ~]# xzcat /var/log/vdsm/supervdsm.log.28.xz |egrep "(WARN|error|ERR)" |more
MainProcess|jsonrpc/7::WARNING::2020-09-01
11:32:36,730::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/7::WARNING::2020-09-01
11:32:36,731::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/7::WARNING::2020-09-01
11:32:36,732::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/7::WARNING::2020-09-01
11:32:36,847::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/7::WARNING::2020-09-01
11:32:36,849::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/7::WARNING::2020-09-01
11:32:36,850::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,974::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,975::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,975::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,976::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,977::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,977::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,978::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,979::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/7::WARNING::2020-09-01 11:32:36,979::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/7::ERROR::2020-09-01 11:32:40,951::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/7::ERROR::2020-09-01 11:32:40,954::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/7::ERROR::2020-09-01 11:32:40,956::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/7::ERROR::2020-09-01 11:32:40,958::udev::53::blivet::(get_device) No
device at ''
[root@ov-no1 ~]# cat /var/log/ovirt-hosted-engine-ha/broker.log.2020-09-01 |egrep -i "(WARN|ERR)" |more
[nothing in the output]
[root@ov-no1 ~]# gluster volume status
Status of volume: data
Gluster process                                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data        49152     0          Y       6070
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data        49152     0          Y       28860
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data        49152     0          Y       5752
Self-heal Daemon on localhost                                 N/A       N/A        Y       6539
Self-heal Daemon on ov-no2.ariadne-t.local                    N/A       N/A        Y       813
Self-heal Daemon on ov-no3.ariadne-t.local                    N/A       N/A        Y       19353

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine    49153     0          Y       6098
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine    49156     0          Y       28867
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine    49153     0          Y       5789
Self-heal Daemon on localhost                                 N/A       N/A        Y       6539
Self-heal Daemon on ov-no2.ariadne-t.local                    N/A       N/A        Y       813
Self-heal Daemon on ov-no3.ariadne-t.local                    N/A       N/A        Y       19353

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore  49154     0          Y       6102
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore  49157     0          Y       28878
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore  49154     0          Y       5772
Self-heal Daemon on localhost                                 N/A       N/A        Y       6539
Self-heal Daemon on ov-no2.ariadne-t.local                    N/A       N/A        Y       813
Self-heal Daemon on ov-no3.ariadne-t.local                    N/A       N/A        Y       19353

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
[root@ov-no1 ~]# gluster volume heal engine info
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
[root@ov-no1 ~]# gluster volume heal engine info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no1 ~]# gluster volume heal data info
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
[root@ov-no1 ~]# gluster volume heal data info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no1 ~]# gluster volume heal vmstore info
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
[root@ov-no1 ~]# gluster volume heal vmstore info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no1 ~]# ps -A|grep gluster
5522 ? 00:01:40 glustereventsd
5591 ? 00:05:04 glusterd
6070 ? 02:16:28 glusterfsd
6098 ? 05:44:14 glusterfsd
6102 ? 00:23:48 glusterfsd
6285 ? 00:00:38 glustereventsd
6539 ? 01:32:19 glusterfs
10470 ? 00:34:10 glusterfs
17763 ? 00:50:18 glusterfs
18101 ? 00:10:23 glusterfs
Node2:
[root@ov-no2 ~]# more /var/log/messages |egrep "(WARN|error)"|more
Sep 1 07:58:30 ov-no2 vdsm[13586]: WARN Worker blocked: <Worker name=jsonrpc/7 running
<Task <JsonRpcTask {'params': {}, 'jsonrpc': '2.0',
'method':
u'GlusterVolume.geoRepSessionList', 'id':
u'ec8af6cc-6109-4ddb-a6fb-ad9f4fbc1e7b'} at 0x7f44602a7c90> timeout=60,
duration=120.00 at 0x7f44602a7510>
task#=5773350 at 0x7f44c00c71d0>, traceback:#012File:
"/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012
self.__bootstrap_inner()#01
2File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012
self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in
run#012 self.__target(*self.__args, **self.__kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in
run#012 r
et = func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012
self._execute_task()#012File: "/usr/l
ib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012
task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", li
ne 391, in __call__#012 self._callable()#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in
__call__#012 self._handler
(self._ctx, self._req)#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in
_serveRequest#012 response = self._handle_req
uest(req, ctx)#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in
_handle_request#012 res = method(**params)#012File: "
/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012
result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-pac
kages/vdsm/gluster/apiwrapper.py", line 237, in geoRepSessionList#012
remoteUserName)#012File: "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py
", line 93, in wrapper#012 rv = func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 551, in
volumeGeoRepSess
ionList#012 remoteUserName,#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in
__call__#012 return callMethod()#012F
ile: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in
<lambda>#012 **kwargs)#012File: "<string>", line 2, in
glusterVolumeGe
oRepStatus#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line
759, in _callmethod#012 kind, result = conn.recv()
Sep 1 09:48:02 ov-no2 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Unhandled monitoring loop
exception#012T
raceback (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 449, in start_moni
toring#012 self._monitoring_loop()#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 468, in _monit
oring_loop#012 for old_state, state, delay in self.fsm:#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/fsm/machine.py",
li
ne 127, in next#012 new_data = self.refresh(self._state.data)#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/state_machi
ne.py", line 81, in refresh#012
stats.update(self.hosted_engine.collect_stats())#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_h
a/agent/hosted_engine.py", line 756, in collect_stats#012 all_stats =
self._broker.get_stats_from_storage()#012 File "/usr/lib/python2.7/site-pac
kages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 143, in
get_stats_from_storage#012 result = self._proxy.get_stats()#012 File
"/usr/lib64/py
thon2.7/xmlrpclib.py", line 1233, in __call__#012 return self.__send(self.__name,
args)#012 File "/usr/lib64/python2.7/xmlrpclib.py", line 1591,
in __request#012 verbose=self.__verbose#012 File
"/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request#012 return
self.single_request(hos
t, handler, request_body, verbose)#012 File
"/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request#012
self.send_content(h, request_bo
dy)#012 File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in
send_content#012 connection.endheaders(request_body)#012 File "/usr/lib64/python
2.7/httplib.py", line 1037, in endheaders#012 self._send_output(message_body)#012
File "/usr/lib64/python2.7/httplib.py", line 881, in _send_outp
ut#012 self.send(msg)#012 File "/usr/lib64/python2.7/httplib.py", line 843,
in send#012 self.connect()#012 File "/usr/lib/python2.7/site-pack
ages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 52, in connect#012
self.sock.connect(base64.b16decode(self.host))#012 File "/usr/lib64/python2.
7/socket.py", line 224, in meth#012 return
getattr(self._sock,name)(*args)#012error: [Errno 2] No such file or directory
Sep 1 09:48:02 ov-no2 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent
ERROR Traceback (most recent call last):#012 File "/usr/lib/
python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in
_run_agent#012 return action(he)#012 File "/usr/lib/python2.7/site-p
ackages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper#012
return he.start_monitoring()#012 File "/usr/lib/python2.7/site-pack
ages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 456, in
start_monitoring#012 self.publish(stopped)#012 File "/usr/lib/python2.7/site-pa
ckages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 356, in publish#012
self._push_to_storage(blocks)#012 File "/usr/lib/python2.7/site-p
ackages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 727, in
_push_to_storage#012 self._broker.put_stats_on_storage(self.host_id, blocks)#
012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 113, in put_stats_on_storage#012 self._proxy.put_stats
(host_id, xmlrpclib.Binary(data))#012 File "/usr/lib64/python2.7/xmlrpclib.py",
line 1233, in __call__#012 return self.__send(self.__name, args)#
012 File "/usr/lib64/python2.7/xmlrpclib.py", line 1591, in __request#012
verbose=self.__verbose#012 File "/usr/lib64/python2.7/xmlrpclib.py", l
ine 1273, in request#012 return self.single_request(host, handler, request_body,
verbose)#012 File "/usr/lib64/python2.7/xmlrpclib.py", line 1301
, in single_request#012 self.send_content(h, request_body)#012 File
"/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content#012 connec
tion.endheaders(request_body)#012 File "/usr/lib64/python2.7/httplib.py", line
1037, in endheaders#012 self._send_output(message_body)#012 File
"/usr/lib64/python2.7/httplib.py", line 881, in _send_output#012
self.send(msg)#012 File "/usr/lib64/python2.7/httplib.py", line 843, in
send#012
self.connect()#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line
52, in connect#012 self.sock.connect(b
ase64.b16decode(self.host))#012 File "/usr/lib64/python2.7/socket.py", line
224, in meth#012 return getattr(self._sock,name)(*args)#012error: [Er
rno 2] No such file or directory
Sep 1 09:48:15 ov-no2 journal: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.notifications.Notifications ERROR [Errno 113] No route to
host#012Trace
back (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
line 26, in send_email#012
timeout=float(cfg["smtp-timeout"]))#012 File
"/usr/lib64/python2.7/smtplib.py", line 255, in __init__#012 (code, msg) =
self.connect(host, po
rt)#012 File "/usr/lib64/python2.7/smtplib.py", line 315, in connect#012
self.sock = self._get_socket(host, port, self.timeout)#012 File "/usr/l
ib64/python2.7/smtplib.py", line 290, in _get_socket#012 return
socket.create_connection((host, port), timeout)#012 File "/usr/lib64/python2.7/so
cket.py", line 571, in create_connection#012 raise err#012error: [Errno 113] No
route to host
Sep 1 09:48:25 ov-no2 journal: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.notifications.Notifications ERROR [Errno 113] No route to
host#012Trace
back (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
line 26, in send_email#012
timeout=float(cfg["smtp-timeout"]))#012 File
"/usr/lib64/python2.7/smtplib.py", line 255, in __init__#012 (code, msg) =
self.connect(host, po
rt)#012 File "/usr/lib64/python2.7/smtplib.py", line 315, in connect#012
self.sock = self._get_socket(host, port, self.timeout)#012 File "/usr/l
ib64/python2.7/smtplib.py", line 290, in _get_socket#012 return
socket.create_connection((host, port), timeout)#012 File "/usr/lib64/python2.7/so
cket.py", line 571, in create_connection#012 raise err#012error: [Errno 113] No
route to host
Sep 1 09:49:15 ov-no2 journal: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.notifications.Notifications ERROR [Errno 113] No route to
host#012Trace
back (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
line 26, in send_email#012
timeout=float(cfg["smtp-timeout"]))#012 File
"/usr/lib64/python2.7/smtplib.py", line 255, in __init__#012 (code, msg) =
self.connect(host, po
rt)#012 File "/usr/lib64/python2.7/smtplib.py", line 315, in connect#012
self.sock = self._get_socket(host, port, self.timeout)#012 File "/usr/l
ib64/python2.7/smtplib.py", line 290, in _get_socket#012 return
socket.create_connection((host, port), timeout)#012 File "/usr/lib64/python2.7/so
cket.py", line 571, in create_connection#012 raise err#012error: [Errno 113] No
route to host
Sep 1 09:49:16 ov-no2 journal: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.notifications.Notifications ERROR [Errno 113] No route to
host#012Trace
back (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
line 26, in send_email#012
timeout=float(cfg["smtp-timeout"]))#012 File
"/usr/lib64/python2.7/smtplib.py", line 255, in __init__#012 (code, msg) =
self.connect(host, po
rt)#012 File "/usr/lib64/python2.7/smtplib.py", line 315, in connect#012
self.sock = self._get_socket(host, port, self.timeout)#012 File "/usr/l
ib64/python2.7/smtplib.py", line 290, in _get_socket#012 return
socket.create_connection((host, port), timeout)#012 File "/usr/lib64/python2.7/so
cket.py", line 571, in create_connection#012 raise err#012error: [Errno 113] No
route to host
Sep 1 09:49:17 ov-no2 vdsm[13586]: WARN Attempting to add an existing net user:
ovirtmgmt/baa3d4d1-6918-4447-80fa-64b86dbcb1f9
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' f
ailed: Illegal target name 'libvirt-J-vnet0'.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0'
failed: Illegal target name 'libvirt-P-vnet0'.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -L libvirt-J-vnet0' failed: Chain 'libvirt-J
-vnet0' doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -L libvirt-P-vnet0' failed: Chain 'libvirt-P
-vnet0' doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -F libvirt-J-vnet0' failed: Chain 'libvirt-J
-vnet0' doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -X libvirt-J-vnet0' failed: Chain 'libvirt-J
-vnet0' doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -F libvirt-P-vnet0' failed: Chain 'libvirt-P
-vnet0' doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -X libvirt-P-vnet0' failed: Chain 'libvirt-P
-vnet0' doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -F J-vnet0-mac' failed: Chain 'J-vnet0-mac'
doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -X J-vnet0-mac' failed: Chain 'J-vnet0-mac'
doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -F J-vnet0-arp-mac' failed: Chain 'J-vnet0-a
rp-mac' doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
--concurrent -t nat -X J-vnet0-arp-mac' failed: Chain 'J-vnet0-a
rp-mac' doesn't exist.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-
out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a
chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0
' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try
`iptables -h' or 'iptables --help' for more information.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0'
failed: iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `iptables
-h' or 'iptables --help' for more information.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vn
et0' failed: iptables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try
`iptables -h' or 'iptables --help' for more information.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -F FO-vnet0' failed: iptables: No chain/target/match by t
hat name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -X FO-vnet0' failed: iptables: No chain/target/match by t
hat name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -F FI-vnet0' failed: iptables: No chain/target/match by t
hat name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -X FI-vnet0' failed: iptables: No chain/target/match by t
hat name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -F HI-vnet0' failed: iptables: No chain/target/match by t
hat name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -X HI-vnet0' failed: iptables: No chain/target/match by t
hat name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -E FP-vnet0 FO-vnet0' failed: iptables: No chain/target/m
atch by that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -E FJ-vnet0 FI-vnet0' failed: iptables: No chain/target/m
atch by that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables
-w2 -w -E HJ-vnet0 HI-vnet0' failed: iptables: No chain/target/m
atch by that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev
-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more informatio
n.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet
0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try
`ip6tables -h' or 'ip6tables --help' for more information.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0'
failed: ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `ip6tables
-h' or 'ip6tables --help' for more information.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-v
net0' failed: ip6tables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try
`ip6tables -h' or 'ip6tables --help' for more information.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -F FO-vnet0' failed: ip6tables: No chain/target/match by
that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -X FO-vnet0' failed: ip6tables: No chain/target/match by
that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -F FI-vnet0' failed: ip6tables: No chain/target/match by
that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -X FI-vnet0' failed: ip6tables: No chain/target/match by
that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -F HI-vnet0' failed: ip6tables: No chain/target/match by
that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -X HI-vnet0' failed: ip6tables: No chain/target/match by
that name.
Sep 1 09:49:19 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -E FP-vnet0 FO-vnet0' failed: ip6tables: No chain/target
/match by that name.
Sep 1 09:49:20 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -E FJ-vnet0 FI-vnet0' failed: ip6tables: No chain/target
/match by that name.
Sep 1 09:49:20 ov-no2 firewalld[2428]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables
-w2 -w -E HJ-vnet0 HI-vnet0' failed: ip6tables: No chain/target
/match by that name.
.
.
.
.
.
Sep 1 09:49:22 ov-no2 vdsm[13586]: WARN File:
/var/lib/libvirt/qemu/channels/baa3d4d1-6918-4447-80fa-64b86dbcb1f9.ovirt-guest-agent.0
already remove
d
Sep 1 09:49:22 ov-no2 vdsm[13586]: WARN Attempting to remove a non existing net user:
ovirtmgmt/baa3d4d1-6918-4447-80fa-64b86dbcb1f9
Sep 1 09:49:22 ov-no2 vdsm[13586]: WARN File:
/var/lib/libvirt/qemu/channels/baa3d4d1-6918-4447-80fa-64b86dbcb1f9.org.qemu.guest_agent.0
already rem
oved
Sep 1 09:49:22 ov-no2 vdsm[13586]: WARN File:
/var/run/ovirt-vmconsole-console/baa3d4d1-6918-4447-80fa-64b86dbcb1f9.sock already
removed
Sep 1 09:49:27 ov-no2 journal: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.notifications.Notifications ERROR [Errno 113] No route to
host#012Trace
back (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
line 26, in send_email#012
timeout=float(cfg["smtp-timeout"]))#012 File
"/usr/lib64/python2.7/smtplib.py", line 255, in __init__#012 (code, msg) =
self.connect(host, po
rt)#012 File "/usr/lib64/python2.7/smtplib.py", line 315, in connect#012
self.sock = self._get_socket(host, port, self.timeout)#012 File "/usr/l
ib64/python2.7/smtplib.py", line 290, in _get_socket#012 return
socket.create_connection((host, port), timeout)#012 File "/usr/lib64/python2.7/so
cket.py", line 571, in create_connection#012 raise err#012error: [Errno 113] No
route to host
Sep 1 09:49:27 ov-no2 journal: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.notifications.Notifications ERROR [Errno 113] No route to
host#012Trace
back (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
line 26, in send_email#012
timeout=float(cfg["smtp-timeout"]))#012 File
"/usr/lib64/python2.7/smtplib.py", line 255, in __init__#012 (code, msg) =
self.connect(host, po
rt)#012 File "/usr/lib64/python2.7/smtplib.py", line 315, in connect#012
self.sock = self._get_socket(host, port, self.timeout)#012 File "/usr/l
ib64/python2.7/smtplib.py", line 290, in _get_socket#012 return
socket.create_connection((host, port), timeout)#012 File "/usr/lib64/python2.7/so
cket.py", line 571, in create_connection#012 raise err#012error: [Errno 113] No
route to host
Sep 1 09:49:37 ov-no2 journal: ovirt-ha-broker
ovirt_hosted_engine_ha.broker.notifications.Notifications ERROR [Errno 113] No route to
host#012Trace
back (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/notifications.py",
line 26, in send_email#012
timeout=float(cfg["smtp-timeout"]))#012 File
"/usr/lib64/python2.7/smtplib.py", line 255, in __init__#012 (code, msg) =
self.connect(host, po
rt)#012 File "/usr/lib64/python2.7/smtplib.py", line 315, in connect#012
self.sock = self._get_socket(host, port, self.timeout)#012 File "/usr/l
ib64/python2.7/smtplib.py", line 290, in _get_socket#012 return
socket.create_connection((host, port), timeout)#012 File "/usr/lib64/python2.7/so
cket.py", line 571, in create_connection#012 raise err#012error: [Errno 113] No
route to host
Sep 1 10:01:31 ov-no2 vdsm[13586]: WARN Worker blocked: <Worker name=jsonrpc/4 running
<Task <JsonRpcTask {'params': {}, 'jsonrpc': '2.0',
'method':
u'GlusterVolume.geoRepSessionList', 'id':
u'7f800636-f107-4b98-a35f-2175ec72b7b8'} at 0x7f44805aa110> timeout=60,
duration=60.00 at 0x7f44805aad10>
task#=5775347 at 0x7f44d81c4110>, traceback:#012File:
"/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012
self.__bootstrap_inner()#012
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012
self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in
run#012 self.__target(*self.__args, **self.__kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in
run#012 re
t = func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012
self._execute_task()#012File: "/usr/li
b/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012
task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", lin
e 391, in __call__#012 self._callable()#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in
__call__#012 self._handler(
self._ctx, self._req)#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in
_serveRequest#012 response = self._handle_requ
est(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
line 345, in _handle_request#012 res = method(**params)#012File: "/
usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012
result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-pack
ages/vdsm/gluster/apiwrapper.py", line 237, in geoRepSessionList#012
remoteUserName)#012File: "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py"
, line 93, in wrapper#012 rv = func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 551, in
volumeGeoRepSessi
onList#012 remoteUserName,#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in
__call__#012 return callMethod()#012Fi
le: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in
<lambda>#012 **kwargs)#012File: "<string>", line 2, in
glusterVolumeGeo
RepStatus#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759,
in _callmethod#012 kind, result = conn.recv()
[root@ov-no2 ~]# xzcat /var/log/vdsm/vdsm.log.57.xz |egrep "(WARN|error)" |more
2020-09-01 09:24:06,889+0300 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:24:16,910+0300 INFO (jsonrpc/6) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:24:16,911+0300 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:24:26,932+0300 INFO (jsonrpc/5) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:24:26,933+0300 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.01 seconds (__init__:312)
2020-09-01 09:24:30,375+0300 WARN (monitor/0a5e552) [storage.PersistentDict] Could not
parse line `
2020-09-01 09:24:30,379+0300 WARN (monitor/e2e7282) [storage.PersistentDict] Could not
parse line `
.
.
.
.
2020-09-01 09:47:37,008+0300 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:47:47,030+0300 INFO (jsonrpc/5) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:47:47,030+0300 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:48:01,712+0300 WARN (jsonrpc/6) [storage.PersistentDict] Could not parse
line `
2020-09-01 09:48:01,743+0300 WARN (jsonrpc/1) [storage.PersistentDict] Could not parse
line `
2020-09-01 09:48:02,220+0300 WARN (jsonrpc/0) [storage.PersistentDict] Could not parse
line `
2020-09-01 09:48:05,192+0300 WARN (periodic/22) [storage.PersistentDict] Could not parse
line `
2020-09-01 09:48:11,745+0300 WARN (monitor/e2e7282) [storage.PersistentDict] Could not
parse line `
2020-09-01 09:48:13,711+0300 INFO (jsonrpc/7) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:48:13,711+0300 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:48:23,591+0300 INFO (jsonrpc/0) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:48:23,591+0300 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:48:23,731+0300 INFO (jsonrpc/5) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:48:23,731+0300 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:48:33,754+0300 INFO (jsonrpc/4) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:48:33,754+0300 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:48:43,787+0300 INFO (jsonrpc/2) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:48:43,787+0300 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:48:53,808+0300 INFO (jsonrpc/4) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:48:53,809+0300 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:49:03,830+0300 INFO (jsonrpc/4) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:49:03,831+0300 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:49:13,852+0300 INFO (jsonrpc/0) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:49:13,853+0300 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.01 seconds (__init__:312)
2020-09-01 09:49:16,042+0300 INFO (jsonrpc/4) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:49:16,042+0300 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.01 seconds (__init__:312)
2020-09-01 09:49:16,405+0300 INFO (jsonrpc/0) [api] FINISH getStats error=Virtual machine
does not exist: {'vmId': u'baa3d4d1-6918-4447-80fa-64b86db
cb1f9'} (api:129)
2020-09-01 09:49:16,405+0300 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call
VM.getStats failed (error 1) in 0.00 seconds (__init__:312)
2020-09-01 09:49:16,797+0300 INFO (jsonrpc/7) [api.virt] START
create(vmParams={u'xml': u'<?xml version=\'1.0\'
encoding=\'UTF-8\'?>\n<domain xmlns:
ovirt-tune="http://ovirt.org/vm/tune/1.0"
xmlns:ovirt-vm="http://ovirt.org/vm/1.0"
type="kvm"><name>HostedEngine</name><uuid>baa3d4d1-6918-4447-80fa-
64b86dbcb1f9</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory
slots="16">67108864</maxMemory>
<vcpu current="4">64</vcpu><sysinfo
type="smbios"><system><entry
name="manufacturer">oVirt</entry><entry
name="product">OS-NAME:</entry><entry name="
version">OS-VERSION:</entry><entry
name="serial">HOST-SERIAL:</entry><entry
name="uuid">baa3d4d1-6918-4447-80fa-64b86dbcb1f9</entry></system></sysinf
o><clock offset="variable" adjustment="0"><timer
name="rtc" tickpolicy="catchup"/><timer name="pit"
tickpolicy="delay"/><timer name="hpet" present="n
o"/></clock><features><acpi/></features><cpu
match="exact"><model>Westmere</model><feature
name="pcid" policy="require"/><feature
name="spec-ctrl" po
licy="require"/><feature name="ssbd"
policy="require"/><topology cores="4" threads="1"
sockets="16"/><numa><cell id="0" cpus="0,1,2,3"
memory="167772
16"/></numa></cpu><cputune/><devices><input
type="tablet" bus="usb"/><channel
type="unix"><target type="virtio"
name="ovirt-guest-agent.0"/><source m
ode="bind"
path="/var/lib/libvirt/qemu/channels/baa3d4d1-6918-4447-80fa-64b86dbcb1f9.ovirt-guest-agent.0"/></channel><channel
type="unix"><target typ
e="virtio" name="org.qemu.guest_agent.0"/><source
mode="bind"
path="/var/lib/libvirt/qemu/channels/baa3d4d1-6918-4447-80fa-64b86dbcb1f9.org.qemu.gues
t_agent.0"/></channel><controller type="usb"
model="piix3-uhci" index="0"><address bus="0x00"
domain="0x0000" function="0x2" slot="0x01"
type="pci"/>
</controller><console type="unix"><source
path="/var/run/ovirt-vmconsole-console/baa3d4d1-6918-4447-80fa-64b86dbcb1f9.sock"
mode="bind"/><target type
="serial" port="0"/><alias
name="ua-228f9c56-c195-4f1e-8cd2-5f7ef60a78f1"/></console><memballoon
model="virtio"><stats period="5"/><alias name="ua-42
10968c-acf1-46cc-8a35-6e29e3c58b38"/><address bus="0x00"
domain="0x0000" function="0x0" slot="0x08"
type="pci"/></memballoon><controller type="scsi"
model="virtio-scsi" index="0"><driver
iothread="1"/><alias
name="ua-473cb0f7-09da-4077-a0e2-8eb5a5efdbe4"/><address
bus="0x00" domain="0x0000" functi
on="0x0" slot="0x05"
type="pci"/></controller><sound model="ich6"><alias
name="ua-496f9a72-5484-4dc0-83f4-caa8225100ae"/><address
bus="0x00" domain="
0x0000" function="0x0" slot="0x04"
type="pci"/></sound><graphics type="vnc"
port="-1" autoport="yes" passwd="*****"
passwdValidTo="1970-01-01T00:00:0
1" keymap="en-us"><listen type="network"
network="vdsm-ovirtmgmt"/></graphics><controller
type="ide" index="0"><address bus="0x00"
domain="0x0000" fu
nction="0x1" slot="0x01"
type="pci"/></controller><graphics type="spice"
port="-1" autoport="yes" passwd="*****"
passwdValidTo="1970-01-01T00:00:01"
tlsPort="-1"><channel name="main"
mode="secure"/><channel name="inputs"
mode="secure"/><channel name="cursor"
mode="secure"/><channel name="playback"
mode="secure"/><channel name="record"
mode="secure"/><channel name="display"
mode="secure"/><channel name="smartcard"
mode="secure"/><channel name="
usbredir" mode="secure"/><listen type="network"
network="vdsm-ovirtmgmt"/></graphics><rng
model="virtio"><backend
model="random">/dev/urandom</backen
d><alias
name="ua-9edfa822-3b11-424e-83f8-4846c6221a6e"/></rng><video><model
type="qxl" vram="32768" heads="1" ram="65536"
vgamem="16384"/><alias nam
e="ua-a4ad75db-c91d-45df-a767-33a71009587f"/><address bus="0x00"
domain="0x0000" function="0x0" slot="0x02"
type="pci"/></video><controller type="vir
tio-serial" index="0" ports="16"><alias
name="ua-f60f1742-07fe-464e-a160-1a16c94db61a"/><address
bus="0x00" domain="0x0000" function="0x0" slot="0x06
" type="pci"/></controller><serial
type="unix"><source
path="/var/run/ovirt-vmconsole-console/baa3d4d1-6918-4447-80fa-64b86dbcb1f9.sock"
mode="bind"/
<target port="0"/></serial><channel
type="spicevmc"><target type="virtio"
name="com.redhat.spice.0"/></channel><interface
type="bridge"><model type=
"virtio"/><link
state="up"/><source bridge="ovirtmgmt"/><driver
queues="4" name="vhost"/><alias
name="ua-e97c94c0-ffc2-46dd-b5cd-9d9c123f4db0"/><addr
ess bus="0x00" domain="0x0000" function="0x0"
slot="0x03" type="pci"/><mac
address="00:16:3e:27:6e:ef"/><mtu size="1500"/><filterref
filter="vdsm-no-
mac-spoofing"/><bandwidth/></interface><disk type="file"
device="cdrom" snapshot="no"><driver name="qemu"
type="raw" error_policy="report"/><source f
ile="" startupPolicy="optional"><seclabel model="dac"
type="none" relabel="no"/></source><target
dev="hdc" bus="ide"/><readonly/><alias name="ua-9034
9138-d20e-4306-b0e2-271e6baaad4a"/><address bus="1"
controller="0" unit="0" type="drive"
target="0"/></disk><disk snapshot="no"
type="file" device="d
isk"><target dev="vda" bus="virtio"/><source
file="/rhev/data-center/00000000-0000-0000-0000-000000000000/80f6e393-9718-4738-a14a-64cf43c3d8c2/images
/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7"><seclabel
model="dac" type="none"
relabel="no"/></source><driver name="qe
mu" iothread="1" io="threads" type="raw"
error_policy="stop" cache="none"/><alias
name="ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2"/><address
bus="0x00"
domain="0x0000" function="0x0" slot="0x07"
type="pci"/><serial>d5de54b6-9f8e-4fba-819b-ebf6780757d2</serial></disk><lease><key>a48555f4-be23-4467-8a
54-400ae7baf9d7</key><lockspace>80f6e393-9718-4738-a14a-64cf43c3d8c2</lockspace><target
offset="LEASE-OFFSET:a48555f4-be23-4467-8a54-400ae7baf9d7:80f
6e393-9718-4738-a14a-64cf43c3d8c2"
path="LEASE-PATH:a48555f4-be23-4467-8a54-400ae7baf9d7:80f6e393-9718-4738-a14a-64cf43c3d8c2"/></lease></devices><pm
<suspend-to-disk enabled="no"/><suspend-to-mem
enabled="no"/></pm><os><type arch="x86_64"
machine="pc-i440fx-rhel7.6.0">hvm</type><smbios mode="sysi
nfo"/><bios
useserial="yes"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb
type="int">1024</ovirt-vm:minGuaranteedMemo
ryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion><ovirt-vm:custom/><ovirt-vm:device
mac_address="00:16:3e:27:6e:ef"><ovirt-vm:custom/></ovi
rt-vm:device><ovirt-vm:device devtype="disk"
name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovirt-vm:volumeID>a48
555f4-be23-4467-8a54-400ae7baf9d7</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ovirt-vm:imageID>d5de54b6-9f8e-4fba-819b-ebf6780757
d2</ovirt-vm:imageID><ovirt-vm:domainID>80f6e393-9718-4738-a14a-64cf43c3d8c2</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt
-vm:launchPaused><ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>'})
from=::1,36826, vmId=baa3d4d1-69
18-4447-80fa-64b86dbcb1f9 (api:48)
[root@ov-no2 ~]# xzcat /var/log/vdsm/supervdsm.log.30.xz |egrep "(WARN|error|ERR)" |more
MainProcess|jsonrpc/4::ERROR::2020-09-01
06:56:30,652::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in
volumeGeoRepStatus
MainProcess|jsonrpc/3::WARNING::2020-09-01
07:32:27,206::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/3::WARNING::2020-09-01
07:32:27,213::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/3::WARNING::2020-09-01
07:32:27,217::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/3::WARNING::2020-09-01
07:32:27,565::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/3::WARNING::2020-09-01
07:32:27,571::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/3::WARNING::2020-09-01
07:32:27,573::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,735::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,735::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,736::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,736::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,737::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,738::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,740::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,740::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/3::WARNING::2020-09-01 07:32:27,741::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/3::ERROR::2020-09-01 07:32:31,889::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/3::ERROR::2020-09-01 07:32:31,893::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/3::ERROR::2020-09-01 07:32:31,894::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/3::ERROR::2020-09-01 07:32:31,896::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/7::ERROR::2020-09-01
07:58:30,879::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in
volumeGeoRepStatus
[root@ov-no2 ~]# xzcat /var/log/vdsm/supervdsm.log.29.xz |egrep
"(WARN|error|ERR)" |more
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,179::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,180::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,181::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,390::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,393::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01
09:32:32,395::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,510::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,510::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,511::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,511::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,512::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,512::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,513::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,513::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-01 09:32:32,514::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::ERROR::2020-09-01 09:32:36,440::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-01 09:32:36,444::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-01 09:32:36,446::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-01 09:32:36,448::udev::53::blivet::(get_device) No
device at ''
[root@ov-no2 ~]# cat /var/log/vdsm/supervdsm.log |egrep "(WARN|error|ERR)"
|more
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,056::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,057::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,059::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,177::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,179::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,180::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,292::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,292::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,293::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,294::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,294::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,295::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,295::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,296::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,296::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::ERROR::2020-09-03 19:34:59,217::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-03 19:34:59,220::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-03 19:34:59,222::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-03 19:34:59,223::udev::53::blivet::(get_device) No
device at ''
[root@ov-no2 ~]# cat /var/log/ovirt-hosted-engine-ha/agent.log.2020-09-01 |egrep -i
"(WARN|ERR)" |more
MainThread::ERROR::2020-09-01
09:48:02,766::hosted_engine::452::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Unhandled
monitoring loop exception
error: [Errno 2] No such file or directory
MainThread::ERROR::2020-09-01
09:48:02,786::agent::144::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Traceback
(most recent call last):
error: [Errno 2] No such file or directory
MainThread::ERROR::2020-09-01
09:48:02,786::agent::145::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Trying to
restart agent
MainThread::INFO::2020-09-01
09:49:16,925::hosted_engine::854::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm)
stderr: Co
mmand VM.getStats with args {'vmID':
'baa3d4d1-6918-4447-80fa-64b86dbcb1f9'} failed:
MainThread::INFO::2020-09-01
09:49:27,616::hosted_engine::931::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
stderr: Com
mand VM.destroy with args {'vmID': 'baa3d4d1-6918-4447-80fa-64b86dbcb1f9'}
failed:
MainThread::ERROR::2020-09-01
09:49:27,616::hosted_engine::936::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
Failed to
stop engine vm with /usr/sbin/hosted-engine --vm-poweroff: Command VM.destroy with args
{'vmID': 'baa3d4d1-6918-4447-80fa-64b86dbcb1f9'} failed:
MainThread::ERROR::2020-09-01
09:49:27,616::hosted_engine::942::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm)
Failed to
stop engine VM: Command VM.destroy with args {'vmID':
'baa3d4d1-6918-4447-80fa-64b86dbcb1f9'} failed:
[root@ov-no2 ~]# cat /var/log/ovirt-hosted-engine-ha/broker.log.2020-09-01 |egrep -i
"(WARN|ERR)" |more
StatusStorageThread::ERROR::2020-09-01
09:47:52,786::status_broker::90::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run)
Failed
to update state.
raise ex.HostIdNotLockedError("Host id is not set")
HostIdNotLockedError: Host id is not set
StatusStorageThread::ERROR::2020-09-01
09:47:52,787::status_broker::70::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(trigger_res
tart) Trying to restart the broker
Listener::ERROR::2020-09-01
09:48:15,543::notifications::39::ovirt_hosted_engine_ha.broker.notifications.Notifications::(send_email)
[Errno 113] No r
oute to host
raise err
error: [Errno 113] No route to host
Listener::ERROR::2020-09-01
09:48:25,768::notifications::39::ovirt_hosted_engine_ha.broker.notifications.Notifications::(send_email)
[Errno 113] No r
oute to host
raise err
error: [Errno 113] No route to host
Listener::ERROR::2020-09-01
09:49:15,838::notifications::39::ovirt_hosted_engine_ha.broker.notifications.Notifications::(send_email)
[Errno 113] No r
oute to host
raise err
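The repeated [Errno 113] "No route to host" lines above come from the broker trying to send notification e-mails, so I also plan to verify that the SMTP server configured for hosted-engine notifications is reachable. This is only a sketch; the <smtp-server> value is a placeholder, and I still need to confirm whether the settings live in the local broker.conf or in the shared configuration:

# list the notification/SMTP settings the broker currently uses (local copy)
grep -i smtp /etc/ovirt-hosted-engine-ha/broker.conf
# test basic reachability of that server (replace <smtp-server> with the value found above)
ping -c 3 <smtp-server>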
The Gluster status and info I could gather at the moment are the following:
[root@ov-no2 ~]# gluster volume status
Status of volume: data
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data 49152 0 Y 6070
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data 49152 0 Y 28860
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data 49152 0 Y 5752
Self-heal Daemon on localhost N/A N/A Y 813
Self-heal Daemon on ov-no1.ariadne-t.local N/A N/A Y 6539
Self-heal Daemon on ov-no3.ariadne-t.local N/A N/A Y 19353
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine 49153 0 Y 6098
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine 49156 0 Y 28867
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine 49153 0 Y 5789
Self-heal Daemon on localhost N/A N/A Y 813
Self-heal Daemon on ov-no1.ariadne-t.local N/A N/A Y 6539
Self-heal Daemon on ov-no3.ariadne-t.local N/A N/A Y 19353
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: vmstore
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore 49154 0 Y 6102
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore 49157 0 Y 28878
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore 49154 0 Y 5772
Self-heal Daemon on localhost N/A N/A Y 813
Self-heal Daemon on ov-no1.ariadne-t.local N/A N/A Y 6539
Self-heal Daemon on ov-no3.ariadne-t.local N/A N/A Y 19353
Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
[root@ov-no2 ~]# gluster volume heal engine info
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
[root@ov-no2 ~]# gluster volume heal engine info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no2 ~]# gluster volume heal data info
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
[root@ov-no2 ~]# gluster volume heal data info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no2 ~]# gluster volume heal vmstore info
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
[root@ov-no2 ~]# gluster volume heal vmstore info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no2 ~]# ps -A |grep gluster
813 ? 02:18:00 glusterfs
14183 ? 1-12:02:26 glusterfs
17944 ? 01:19:03 glustereventsd
17995 ? 00:31:50 glustereventsd
28722 ? 00:02:09 glusterd
28860 ? 02:19:48 glusterfsd
28867 ? 05:23:00 glusterfsd
28878 ? 00:20:15 glusterfsd
32187 ? 1-14:42:54 glusterfs
32309 ? 09:59:54 glusterfs
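Since the plain ps -A output above does not show which mount or brick each gluster process serves, I can expand the interesting PIDs with their full command lines, e.g. (the PIDs are the ones listed above for this node; ps itself is generic, nothing oVirt-specific):

# print full command lines so each glusterfs/glusterfsd PID can be matched to its volume or mount
ps -fww -p 813,14183,32187,32309,28860,28867,28878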
Node3:
[root@ov-no3 ~]# virsh qemu-agent-command HostedEngine
'{"execute":"guest-info"}'
Please enter your authentication name: root
Please enter your password:
{"return":{"version":"2.12.0","supported_commands":[{"enabled":true,"name":"guest-get-osinfo","success-response":true},{"enabled":true,"name":"guest-get-timezone","success-response":true},{"enabled":true,"name":"guest-get-users","success-response":true},{"enabled":true,"name":"guest-get-host-name","success-response":true},{"enabled":false,"name":"guest-exec","success-response":true},{"enabled":false,"name":"guest-exec-status","success-response":true},{"enabled":true,"name":"guest-get-memory-block-info","success-response":true},{"enabled":true,"name":"guest-set-memory-blocks","success-response":true},{"enabled":true,"name":"guest-get-memory-blocks","success-response":true},{"enabled":true,"name":"guest-set-user-password","success-response":true},{"enabled":true,"name":"guest-get-fsinfo","success-response":true},{"enabled":true,"name":"guest-set-vcpus","success-response":true},{"enabled":true,"name":"guest-get-vcpus","success-response":true},{"enabled":true,"name":"guest-network-get-in
terfaces","success-response":true},{"enabled":true,"name":"guest-suspend-hybrid","success-response":false},{"enabled":true,"name":"guest-suspend-ram","success-response":false},{"enabled":true,"name":"guest-suspend-disk","success-response":false},{"enabled":true,"name":"guest-fstrim","success-response":true},{"enabled":true,"name":"guest-fsfreeze-thaw","success-response":true},{"enabled":true,"name":"guest-fsfreeze-freeze-list","success-response":true},{"enabled":true,"name":"guest-fsfreeze-freeze","success-response":true},{"enabled":true,"name":"guest-fsfreeze-status","success-response":true},{"enabled":false,"name":"guest-file-flush","success-response":true},{"enabled":false,"name":"guest-file-seek","success-response":true},{"enabled":false,"name":"guest-file-write","success-response":true},{"enabled":false,"name":"guest-file-read","success-response":true},{"enabled":false,"name":"guest-file-close","success-response":true},{"enabled":false,"name":"guest-file-open","success-response
":true},{"enabled":true,"name":"guest-shutdown","success-response":false},{"enabled":true,"name":"guest-info","success-response":true},{"enabled":true,"name":"guest-set-time","success-response":true},{"enabled":true,"name":"guest-get-time","success-response":true},{"enabled":true,"name":"guest-ping","success-response":true},{"enabled":true,"name":"guest-sync","success-response":true},{"enabled":true,"name":"guest-sync-delimited","success-response":true}]}}
[root@ov-no3 ~]# cat /var/log/messages|egrep -i "(WARN|ERR)" |more
Sep 1 08:55:17 ov-no3 libvirtd: 2020-09-01 05:55:17.816+0000: 5284: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 08:59:30 ov-no3 vdsm[6112]: WARN Worker blocked: <Worker name=jsonrpc/3 running
<Task <JsonRpcTask {'params': {}, 'jsonrpc': '2.0',
'method':
u'GlusterVolume.geoRepSessionList', 'id':
u'8298ea63-eadc-47b8-b8b4-d54982200010'} at 0x7f3c14739d90> timeout=60,
duration=60.00 at 0x7f3c14739dd0> t
ask#=72655 at 0x7f3c34306550>, traceback:#012File:
"/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012
self.__bootstrap_inner()#012Fil
e: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012
self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in run
#012 self.__target(*self.__args, **self.__kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in
run#012 ret =
func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012
self._execute_task()#012File: "/usr/lib/p
ython2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012
task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 3
91, in __call__#012 self._callable()#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in
__call__#012 self._handler(sel
f._ctx, self._req)#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in
_serveRequest#012 response = self._handle_request
(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
line 345, in _handle_request#012 res = method(**params)#012File: "/usr
/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012
result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-package
s/vdsm/gluster/apiwrapper.py", line 237, in geoRepSessionList#012
remoteUserName)#012File: "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py",
l
ine 93, in wrapper#012 rv = func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 551, in
volumeGeoRepSessionL
ist#012 remoteUserName,#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in
__call__#012 return callMethod()#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in
<lambda>#012 **kwargs)#012File: "<string>", line 2, in
glusterVolumeGeoRep
Status#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in
_callmethod#012 kind, result = conn.recv()
Sep 1 09:00:17 ov-no3 libvirtd: 2020-09-01 06:00:17.817+0000: 5281: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:00:30 ov-no3 vdsm[6112]: WARN Worker blocked: <Worker name=jsonrpc/3 running
<Task <JsonRpcTask {'params': {}, 'jsonrpc': '2.0',
'method':
u'GlusterVolume.geoRepSessionList', 'id':
u'8298ea63-eadc-47b8-b8b4-d54982200010'} at 0x7f3c14739d90> timeout=60,
duration=120.00 at 0x7f3c14739dd0>
task#=72655 at 0x7f3c34306550>, traceback:#012File:
"/usr/lib64/python2.7/threading.py", line 785, in __bootstrap#012
self.__bootstrap_inner()#012Fi
le: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner#012
self.run()#012File: "/usr/lib64/python2.7/threading.py", line 765, in ru
n#012 self.__target(*self.__args, **self.__kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in
run#012 ret
= func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run#012
self._execute_task()#012File: "/usr/lib/
python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task#012
task()#012File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line
391, in __call__#012 self._callable()#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in
__call__#012 self._handler(se
lf._ctx, self._req)#012File:
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in
_serveRequest#012 response = self._handle_reques
t(req, ctx)#012File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py",
line 345, in _handle_request#012 res = method(**params)#012File: "/us
r/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod#012
result = fn(*methodArgs)#012File: "/usr/lib/python2.7/site-packag
es/vdsm/gluster/apiwrapper.py", line 237, in geoRepSessionList#012
remoteUserName)#012File: "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py",
line 93, in wrapper#012 rv = func(*args, **kwargs)#012File:
"/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 551, in
volumeGeoRepSession
List#012 remoteUserName,#012File:
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in
__call__#012 return callMethod()#012File
: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in
<lambda>#012 **kwargs)#012File: "<string>", line 2, in
glusterVolumeGeoRe
pStatus#012File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759,
in _callmethod#012 kind, result = conn.recv()
Sep 1 09:05:17 ov-no3 libvirtd: 2020-09-01 06:05:17.816+0000: 5283: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:10:17 ov-no3 libvirtd: 2020-09-01 06:10:17.823+0000: 5282: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:15:17 ov-no3 libvirtd: 2020-09-01 06:15:17.817+0000: 5281: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:20:17 ov-no3 libvirtd: 2020-09-01 06:20:17.820+0000: 5282: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:25:17 ov-no3 libvirtd: 2020-09-01 06:25:17.818+0000: 5280: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:30:17 ov-no3 libvirtd: 2020-09-01 06:30:17.820+0000: 5282: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:35:17 ov-no3 libvirtd: 2020-09-01 06:35:17.815+0000: 5281: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:40:17 ov-no3 libvirtd: 2020-09-01 06:40:17.823+0000: 5284: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:45:17 ov-no3 libvirtd: 2020-09-01 06:45:17.819+0000: 5280: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:50:17 ov-no3 libvirtd: 2020-09-01 06:50:17.824+0000: 5280: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 09:55:17 ov-no3 libvirtd: 2020-09-01 06:55:17.818+0000: 5284: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 10:00:17 ov-no3 libvirtd: 2020-09-01 07:00:17.822+0000: 5284: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
Sep 1 10:05:17 ov-no3 libvirtd: 2020-09-01 07:05:17.824+0000: 5282: error :
qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU gues
t agent is not connected
.
.
.
.
[which continues in the same way up to now]
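As for the recurring "Guest agent is not responding" errors from libvirtd, I intend to log in to the affected guests and check whether the guest agent service is actually installed and running, roughly like this (just a sketch; the service names vary with the guest OS, and one of the two may not exist on a given VM):

# run inside the guest VM, not on the host
systemctl status qemu-guest-agent
systemctl status ovirt-guest-agent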
[root@ov-no3 ~]# xzcat /var/log/vdsm/vdsm.log.57.xz |egrep "(WARN|error)" |more
2020-09-01 09:31:35,023+0300 WARN (monitor/0a5e552) [storage.PersistentDict] Could not
parse line ``. (persistent:241)
2020-09-01 09:31:35,026+0300 WARN (monitor/e2e7282) [storage.PersistentDict] Could not
parse line ``. (persistent:241)
2020-09-01 09:35:19,868+0300 WARN (monitor/80f6e39) [storage.PersistentDict] Could not
parse line ``. (persistent:241)
2020-09-01 09:35:22,210+0300 WARN (qgapoller/1) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:35:32,207+0300 WARN (qgapoller/3) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:35:42,213+0300 WARN (qgapoller/1) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:35:52,212+0300 WARN (qgapoller/2) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:36:02,207+0300 WARN (qgapoller/1) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:36:12,215+0300 WARN (qgapoller/2) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:36:35,276+0300 WARN (monitor/0a5e552) [storage.PersistentDict] Could not
parse line ``. (persistent:241)
2020-09-01 09:36:35,281+0300 WARN (monitor/e2e7282) [storage.PersistentDict] Could not
parse line ``. (persistent:241)
2020-09-01 09:40:18,494+0300 WARN (qgapoller/3) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c6803ff50> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:40:18,534+0300 WARN (qgapoller/1) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c6803fe60> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:40:20,244+0300 WARN (monitor/80f6e39) [storage.PersistentDict] Could not
parse line ``. (persistent:241)
2020-09-01 09:40:22,209+0300 WARN (qgapoller/3) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:40:32,207+0300 WARN (qgapoller/1) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:40:42,214+0300 WARN (qgapoller/3) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
2020-09-01 09:40:52,215+0300 WARN (qgapoller/2) [virt.periodic.VmDispatcher] could not
run <function <lambda> at 0x7f3c680470c8> on
['5836fc52-b585-46f8-8780-2cc5accd5847'] (periodic:289)
[root@ov-no3 ~]# xzcat /var/log/vdsm/supervdsm.log.29.xz|egrep "(WARN|ERR)"
|more
MainProcess|jsonrpc/4::WARNING::2020-09-01
09:32:32,165::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/4::WARNING::2020-09-01
09:32:32,166::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/4::WARNING::2020-09-01
09:32:32,168::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/4::WARNING::2020-09-01
09:32:32,285::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/4::WARNING::2020-09-01
09:32:32,287::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/4::WARNING::2020-09-01
09:32:32,288::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,396::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,397::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,397::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,398::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,398::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,399::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,399::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,400::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/4::WARNING::2020-09-01 09:32:32,400::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/4::ERROR::2020-09-01 09:32:36,337::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/4::ERROR::2020-09-01 09:32:36,341::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/4::ERROR::2020-09-01 09:32:36,342::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/4::ERROR::2020-09-01 09:32:36,344::udev::53::blivet::(get_device) No
device at ''
[root@ov-no3 ~]# cat /var/log/vdsm/supervdsm.log|egrep "(WARN|ERR)" |more
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,041::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,043::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,044::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,158::util::196::blivet::(get_sysfs_attr) queue/logical_block_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,160::util::196::blivet::(get_sysfs_attr) queue/optimal_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03
19:34:55,161::util::196::blivet::(get_sysfs_attr) queue/minimum_io_size is not a valid
attribute
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,265::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,266::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,266::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,267::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,267::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,268::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,269::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Read-only locking type set. Write loc
ks are prohibited.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,269::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Recovery of standalone physical volum
es failed.
MainProcess|jsonrpc/1::WARNING::2020-09-03 19:34:55,270::lvm::360::blivet::(pvinfo)
ignoring pvs output line: Cannot process standalone physical vo
lumes
MainProcess|jsonrpc/1::ERROR::2020-09-03 19:34:59,399::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-03 19:34:59,402::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-03 19:34:59,404::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/1::ERROR::2020-09-03 19:34:59,406::udev::53::blivet::(get_device) No
device at ''
MainProcess|jsonrpc/7::ERROR::2020-09-03
19:54:43,983::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in
volumeGeoRepStatus
[root@ov-no3 ~]# cat /var/log/ovirt-hosted-engine-ha/agent.log.2020-09-01 |egrep -i
"(WARN|ERR)" |more
[nothing in the output]
[root@ov-no3 ~]# cat /var/log/ovirt-hosted-engine-ha/broker.log.2020-09-01 |egrep -i
"(WARN|ERR)" |more
Thread-1::WARNING::2020-09-02 12:46:59,657::network::92::network.Network::(action) Failed
to verify network status, (4 out of 5)
[root@ov-no3 ~]# gluster volume status
Status of volume: data
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data 49152 0 Y 6070
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data 49152 0 Y 28860
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data 49152 0 Y 5752
Self-heal Daemon on localhost N/A N/A Y 19353
Self-heal Daemon on ov-no1.ariadne-t.local N/A N/A Y 6539
Self-heal Daemon on ov-no2.ariadne-t.local N/A N/A Y 813
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine 49153 0 Y 6098
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine 49156 0 Y 28867
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine 49153 0 Y 5789
Self-heal Daemon on localhost N/A N/A Y 19353
Self-heal Daemon on ov-no1.ariadne-t.local N/A N/A Y 6539
Self-heal Daemon on ov-no2.ariadne-t.local N/A N/A Y 813
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: vmstore
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore 49154 0 Y 6102
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore 49157 0 Y 28878
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore 49154 0 Y 5772
Self-heal Daemon on localhost N/A N/A Y 19353
Self-heal Daemon on ov-no1.ariadne-t.local N/A N/A Y 6539
Self-heal Daemon on ov-no2.ariadne-t.local N/A N/A Y 813
Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
[root@ov-no3 ~]# gluster volume heal engine info
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
[root@ov-no3 ~]# gluster volume heal engine info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no3 ~]# gluster volume heal data info
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
[root@ov-no3 ~]# gluster volume heal data info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no3 ~]# gluster volume heal vmstore info
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Number of entries: 0
[root@ov-no3 ~]# gluster volume heal vmstore info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no2.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick ov-no3.ariadne-t.local:/gluster_bricks/vmstore/vmstore
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
[root@ov-no3 ~]# ps -A|grep gluster
5143 ? 00:01:31 glustereventsd
5752 ? 02:01:08 glusterfsd
5772 ? 00:17:26 glusterfsd
5789 ? 05:12:24 glusterfsd
5906 ? 00:00:38 glustereventsd
6382 ? 02:37:00 glusterfs
9843 ? 00:02:18 glusterd
10613 ? 00:22:08 glusterfs
10750 ? 00:13:24 glusterfs
19353 ? 01:20:36 glusterfs
Hosted Engine:
[root@ov-eng ~]# grep ERROR /var/log/ovirt-engine/engine.log |more
2020-09-02 03:36:35,199+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler8) [1a87b58
d] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 04:38:35,429+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler2) [15fadfa
2] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 05:40:35,655+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler9) [50fa85f
0] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 06:42:35,885+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler7) [3092fdc
c] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 07:44:36,119+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler6) [696ffca
7] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 08:46:36,341+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler9) [180bb50
] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 09:48:36,562+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler6) [3a45e1b
3] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 10:50:36,788+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler4) [7da1197
3] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 11:52:37,011+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler6) [76b3cb6
5] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 12:54:37,234+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler5) [6a0d904
9] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 13:56:37,459+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler4) [1e25ecf
f] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
2020-09-02 14:58:37,674+03 ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob]
(DefaultQuartzScheduler1) [64144e3
4] VDS error Command execution failed: rc=1 out='Error : Request timed
out\ngeo-replication command failed\n' err=''
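Since all of these engine.log errors come from GlusterGeoRepSyncJob, I want to confirm on the hosts that no geo-replication session is actually defined. As far as I know the standard gluster CLI call for that is the following (please correct me if there is a better way to check):

# should report no sessions if geo-replication was never configured
gluster volume geo-replication status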
How can I find out why the HE failover happened? I suspect it is related to Gluster geo-replication, but I have not configured geo-replication anywhere in the environment. Could you advise me on how to proceed?
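In the meantime, to look for the reason of the HostedEngine shutdown itself, I was planning to check the libvirt/qemu log of the VM on the host that was running it around the time of the incident, e.g. (assuming the default libvirt log location; the rotated .log.N files may need checking too):

# look for the shutdown record and its reason on the host that ran the HE VM
grep -i "shutting down" /var/log/libvirt/qemu/HostedEngine.log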
Thank you all in advance,
Maria Souvalioti