Hello,
after updating the hosted engine from 4.3.3 to 4.3.5, and then the only host composing the environment (plain CentOS 7.6), it seems the host is no longer able to start the vdsm daemons.
The kernel installed with the update is kernel-3.10.0-957.27.2.el7.x86_64.
The same problem occurs also when booting the previously running kernel, 3.10.0-957.12.2.el7.x86_64.
[root@ovirt01 vdsm]# uptime
00:50:08 up 25 min, 3 users, load average: 0.60, 0.67, 0.60
[root@ovirt01 vdsm]#
[root@ovirt01 vdsm]# systemctl status vdsmd -l
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/etc/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: failed (Result: start-limit) since Fri 2019-08-23 00:37:27 CEST; 7s ago
  Process: 25810 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=1/FAILURE)

Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Failed to start Virtual Desktop Server Manager.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Unit vdsmd.service entered failed state.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: vdsmd.service failed.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: vdsmd.service holdoff time over, scheduling restart.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Stopped Virtual Desktop Server Manager.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: start request repeated too quickly for vdsmd.service
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Failed to start Virtual Desktop Server Manager.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: Unit vdsmd.service entered failed state.
Aug 23 00:37:27 ovirt01.mydomain systemd[1]: vdsmd.service failed.
[root@ovirt01 vdsm]#
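The failure seems to come from the ExecStartPre step shown above. If it helps, I can re-run that step by hand and grab the recent journal entries to capture its full error output, e.g. (paths taken from the unit output above):

/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start; echo "rc=$?"
journalctl -u vdsmd --no-pager -n 100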
[root@ovirt01 vdsm]# pwd
/var/log/vdsm
[root@ovirt01 vdsm]# ll -t | head
total 118972
-rw-r--r--. 1 root root 3406465 Aug 23 00:25 supervdsm.log
-rw-r--r--. 1 root root 73621 Aug 23 00:25 upgrade.log
-rw-r--r--. 1 vdsm kvm 0 Aug 23 00:01 vdsm.log
-rw-r--r--. 1 vdsm kvm 538480 Aug 22 23:46 vdsm.log.1.xz
-rw-r--r--. 1 vdsm kvm 187486 Aug 22 23:46 mom.log
-rw-r--r--. 1 vdsm kvm 621320 Aug 22 22:01 vdsm.log.2.xz
-rw-r--r--. 1 root root 374464 Aug 22 22:00 supervdsm.log.1.xz
-rw-r--r--. 1 vdsm kvm 2097122 Aug 22 21:53 mom.log.1
-rw-r--r--. 1 vdsm kvm 636212 Aug 22 20:01 vdsm.log.3.xz
[root@ovirt01 vdsm]#
Link to upgrade.log contents here:
https://drive.google.com/file/d/17jtX36oH1hlbNUAiVhdBkVDbd28QegXG/view?us...
Link to supervdsm.log (in gzip format) here:
https://drive.google.com/file/d/1l61ePU-eFS_xVHEAHnJthzTTnTyzu0MP/view?us...
It seems that since the update I get these kinds of lines inside supervdsm.log:
restore-net::DEBUG::2019-08-22 23:56:38,591::cmdutils::133::root::(exec_cmd) /sbin/tc filter del dev eth0 pref 5000 (cwd None)
restore-net::DEBUG::2019-08-22 23:56:38,595::cmdutils::141::root::(exec_cmd) FAILED: <err> = 'RTNETLINK answers: Invalid argument\nWe have an error talking to the kernel\n'; <rc> = 2
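If useful, I can also re-run the same tc command by hand and list the filters currently installed on eth0, to double-check how the kernel answers outside of vdsm, e.g.:

/sbin/tc filter show dev eth0
/sbin/tc filter del dev eth0 pref 5000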
[root@ovirt01 vdsm]# systemctl status supervdsmd -l
● supervdsmd.service - Auxiliary vdsm service for running helper functions as root
   Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static; vendor preset: enabled)
   Active: active (running) since Fri 2019-08-23 00:25:17 CEST; 23min ago
 Main PID: 4540 (supervdsmd)
    Tasks: 3
   CGroup: /system.slice/supervdsmd.service
           └─4540 /usr/bin/python2 /usr/share/vdsm/supervdsmd --sockfile /var/run/vdsm/svdsm.sock

Aug 23 00:25:17 ovirt01.mydomain systemd[1]: Started Auxiliary vdsm service for running helper functions as root.
[root@ovirt01 vdsm]#
[root@ovirt01 vdsm]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP group default qlen 1000
    link/ether b8:ae:ed:7f:17:11 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:c2:c6:a4:18:c5 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 36:21:c1:5e:70:aa brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 46:d8:db:81:41:4e brd ff:ff:ff:ff:ff:ff
22: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:ae:ed:7f:17:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.211/24 brd 192.168.1.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
[root@ovirt01 vdsm]#
[root@ovirt01 vdsm]# ip route show
default via 192.168.1.1 dev ovirtmgmt
192.168.1.0/24 dev ovirtmgmt proto kernel scope link src 192.168.1.211
[root@ovirt01 vdsm]#
[root@ovirt01 vdsm]# brctl show
bridge name     bridge id               STP enabled     interfaces
ovirtmgmt       8000.b8aeed7f1711       no              eth0
[root@ovirt01 vdsm]#
[root@ovirt01 vdsm]# systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled)
   Active: active (exited) since Fri 2019-08-23 00:25:09 CEST; 26min ago
  Process: 3894 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 3894 (code=exited, status=0/SUCCESS)
    Tasks: 0
   CGroup: /system.slice/openvswitch.service

Aug 23 00:25:09 ovirt01.mydomain systemd[1]: Starting Open vSwitch...
Aug 23 00:25:09 ovirt01.mydomain systemd[1]: Started Open vSwitch.
[root@ovirt01 vdsm]# ovs-vsctl show
02539902-1788-4796-9cdf-cf11ce8436bb
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.11.0"
[root@ovirt01 vdsm]#
Any hints? Thanks,
Gianluca