Failed to start self hosted engine after upgrading oVirt to 4.0

Hi! I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self-hosted engine.
The first thing is that the host network loses the default gateway configuration, but this is not the problem.

Logs:

==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-23 18:28:40,833::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-06-23 18:28:40,833::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2016-06-23 18:28:44,535::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:8d07965c-a5c4-4057-912d-901f80cf246c, volUUID:ce3aa63e-e1c4-498e-bdca-9d2e9f47f0f9
MainThread::INFO::2016-06-23 18:28:44,582::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a, volUUID:3c477b06-063e-4f01-bd05-84c7d467742b
MainThread::INFO::2016-06-23 18:28:44,674::ovf_store::111::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-06-23 18:28:44,675::ovf_store::118::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7/images/bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a/3c477b06-063e-4f01-bd05-84c7d467742b
MainThread::INFO::2016-06-23 18:28:44,682::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Found an OVF for HE VM, trying to convert
MainThread::INFO::2016-06-23 18:28:44,684::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Got vm.conf from OVF_STORE
MainThread::INFO::2016-06-23 18:28:44,684::hosted_engine::517::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Initializing ha-broker connection
MainThread::INFO::2016-06-23 18:28:44,685::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor ping, options {'addr': '192.168.1.254'}

==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-25::ERROR::2016-06-23 18:28:44,697::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle
    data)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch
    .set_storage_domain(client, sd_type, **options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain
    self._backends[client].connect()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect
    volUUID=volume.volume_uuid
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path
    volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable

==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-23 18:28:44,697::hosted_engine::602::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Failed set the storage domain: 'Failed to set storage domain VdsmBackend, options {'hosted-engine.lockspace': '7B22696D6167655F75756964223A202265663131373139322D623564662D346534362D383939622D626666333362663862663135222C202270617468223A206E756C6C2C2022766F6C756D655F75756964223A202230613363393433652D633032392D343134372D623364342D396366353364663161356262227D', 'sp_uuid': '00000000-0000-0000-0000-000000000000', 'dom_type': 'nfs3', 'hosted-engine.metadata': '7B22696D6167655F75756964223A202233353066396361302D363933302D346336342D626438632D613262656631656336383934222C202270617468223A206E756C6C2C2022766F6C756D655F75756964223A202235393335373437392D656664612D343563312D386363392D656131653739633231646561227D', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}: Request failed: <class 'socket.error'>'. Waiting '5's before the next attempt

==> /var/log/vdsm/vdsm.log <==
periodic/3::WARNING::2016-06-23 18:34:41,731::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/2::WARNING::2016-06-23 18:34:43,657::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.BlockjobMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/0::WARNING::2016-06-23 18:34:43,732::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/3::WARNING::2016-06-23 18:34:45,732::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/2::WARNING::2016-06-23 18:34:47,723::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/1::WARNING::2016-06-23 18:34:49,724::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/0::WARNING::2016-06-23 18:34:51,725::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/3::WARNING::2016-06-23 18:34:53,726::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/2::WARNING::2016-06-23 18:34:55,727::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']

==> /var/log/vdsm/mom.log <==
2016-06-23 18:34:56,376 - mom.vdsmInterface - ERROR - Cannot connect to VDSM! [Errno 101] Network is unreachable

==> /var/log/vdsm/vdsm.log <==
periodic/1::WARNING::2016-06-23 18:34:57,727::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/2::WARNING::2016-06-23 18:34:58,652::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.BlockjobMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
periodic/2::WARNING::2016-06-23 18:34:59,728::periodic::269::virt.periodic.VmDispatcher::(__call__) could not run <class 'vdsm.virt.periodic.DriveWatermarkMonitor'> on ['b66ae2c5-de0f-4361-953b-f10226da7eb8']
Reactor thread::INFO::2016-06-23 18:35:00,048::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:192.168.1.50:60604
Reactor thread::DEBUG::2016-06-23 18:35:00,057::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-06-23 18:35:00,058::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:192.168.1.50:60604
Reactor thread::INFO::2016-06-23 18:35:00,058::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-06-23 18:35:00,059::stompreactor::483::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:192.168.1.50', 60604)
JsonRpc (StompReactor)::INFO::2016-06-23 18:35:00,059::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
jsonrpc.Executor/2::DEBUG::2016-06-23 18:35:01,688::__init__::522::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Host.getHardwareInfo' in bridge with {}

[root@ovirt01 ~]# netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program name
tcp   0 0 0.0.0.0:111    0.0.0.0:*  LISTEN  985/rpcbind
tcp   0 0 0.0.0.0:882    0.0.0.0:*  LISTEN  1970/glusterfs
tcp   0 0 0.0.0.0:54322  0.0.0.0:*  LISTEN  925/python
tcp   0 0 0.0.0.0:22     0.0.0.0:*  LISTEN  1600/sshd
tcp   0 0 0.0.0.0:49152  0.0.0.0:*  LISTEN  1978/glusterfsd
tcp   0 0 0.0.0.0:2049   0.0.0.0:*  LISTEN  1970/glusterfs
tcp   0 0 0.0.0.0:38465  0.0.0.0:*  LISTEN  1970/glusterfs
tcp   0 0 0.0.0.0:49153  0.0.0.0:*  LISTEN  1963/glusterfsd
tcp   0 0 0.0.0.0:32802  0.0.0.0:*  LISTEN  2193/rpc.statd
tcp   0 0 0.0.0.0:38466  0.0.0.0:*  LISTEN  1970/glusterfs
tcp   0 0 0.0.0.0:16514  0.0.0.0:*  LISTEN  1633/libvirtd
tcp   0 0 0.0.0.0:38468  0.0.0.0:*  LISTEN  1970/glusterfs
tcp   0 0 0.0.0.0:38469  0.0.0.0:*  LISTEN  1970/glusterfs
tcp   0 0 0.0.0.0:24007  0.0.0.0:*  LISTEN  1652/glusterd
tcp6  0 0 :::54321  :::*  LISTEN  1921/python
tcp6  0 0 :::22     :::*  LISTEN  1600/sshd
tcp6  0 0 :::16514  :::*  LISTEN  1633/libvirtd
udp   0 0 0.0.0.0:40503  0.0.0.0:*  937/avahi-daemon: r
udp   0 0 0.0.0.0:40873  0.0.0.0:*  2193/rpc.statd
udp   0 0 0.0.0.0:111    0.0.0.0:*  985/rpcbind
udp   0 0 127.0.0.1:323  0.0.0.0:*  952/chronyd
udp   0 0 0.0.0.0:694    0.0.0.0:*  985/rpcbind
udp   0 0 127.0.0.1:703  0.0.0.0:*  2193/rpc.statd
udp   0 0 0.0.0.0:879    0.0.0.0:*  1970/glusterfs
udp   0 0 0.0.0.0:5353   0.0.0.0:*  937/avahi-daemon: r
udp6  0 0 ::1:323  :::*  952/chronyd
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix  2  [ ACC ]  STREAM  LISTENING  19455  1633/libvirtd    /var/run/libvirt/libvirt-sock
unix  2  [ ACC ]  STREAM  LISTENING  22529  1633/libvirtd    /var/run/libvirt/libvirt-sock-ro
unix  2  [ ACC ]  STREAM  LISTENING  16902  1/systemd        /var/run/rpcbind.sock
unix  2  [ ACC ]  STREAM  LISTENING  24077  1978/glusterfsd  /var/run/gluster/changelog-36d1d6bf77f0ad218a0db9ada4237737.sock
unix  2  [ ACC ]  STREAM  LISTENING  20494  992/gssproxy     /run/gssproxy.sock
unix  2  [ ACC ]  STREAM  LISTENING  14641  682/multipathd   @/org/kernel/linux/storage/multipathd
unix  2  [ ACC ]  STREAM  LISTENING  16914  1/systemd        /var/run/avahi-daemon/socket
unix  2  [ ACC ]  STREAM  LISTENING  19484  993/sanlock      /var/run/sanlock/sanlock.sock
unix  2  [ ACC ]  STREAM  LISTENING  24095  1963/glusterfsd  /var/run/gluster/changelog-c54f38f29c5c1e84d2766564ea967b8d.sock
unix  2  [ ACC ]  STREAM  LISTENING  16929  1/systemd        /var/run/dbus/system_bus_socket
unix  2  [ ACC ]  STREAM  LISTENING  12093  1/systemd        /run/systemd/private
unix  2  [ ACC ]  STREAM  LISTENING  16924  1/systemd        @ISCSID_UIP_ABSTRACT_NAMESPACE
unix  2  [ ACC ]  STREAM  LISTENING  18255  925/python       /var/run/vdsm/ovirt-imageio-daemon.sock
unix  2  [ ACC ]  STREAM  LISTENING  23905  1963/glusterfsd  /var/run/gluster/b3162364606c81381a4c768484c6f091.socket
unix  2  [ ACC ]  SEQPACKET  LISTENING  13409  1/systemd     /run/udev/control
unix  2  [ ACC ]  STREAM  LISTENING  90477  1970/glusterfs   /var/run/gluster/b6606e2cc1634ac3c57dae961f6a6cc3.socket
unix  2  [ ACC ]  STREAM  LISTENING  13428  1/systemd        /run/lvm/lvmetad.socket
unix  2  [ ACC ]  STREAM  LISTENING  18302  920/python       /var/run/ovirt-hosted-engine-ha/broker.socket
unix  2  [ ACC ]  STREAM  LISTENING  23947  1978/glusterfsd  /var/run/gluster/65f923a8c0294836aec3ffce760511cb.socket
unix  2  [ ACC ]  STREAM  LISTENING  20112  1652/glusterd    /var/run/glusterd.socket
unix  2  [ ACC ]  STREAM  LISTENING  18321  939/python       /var/run/vdsm/svdsm.sock
unix  2  [ ACC ]  STREAM  LISTENING  20493  992/gssproxy     /var/lib/gssproxy/default.sock
unix  2  [ ACC ]  STREAM  LISTENING  13470  1/systemd        /run/lvm/lvmpolld.socket
unix  2  [ ACC ]  STREAM  LISTENING  16909  1/systemd        @ISCSIADM_ABSTRACT_NAMESPACE
unix  2  [ ACC ]  STREAM  LISTENING  18160  990/wdmd         /var/run/wdmd/wdmd.sock
unix  2  [ ACC ]  STREAM  LISTENING  2291   1/systemd        /run/systemd/journal/stdout
unix  2  [ ACC ]  STREAM  LISTENING  90622  1923/python      /var/run/vdsm/mom-vdsm.sock
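Since every local connection to vdsm fails with "[Errno 101] Network is unreachable" and the default gateway is gone, it may be worth checking the host routing table before anything else. A minimal sketch, assuming the management bridge is named ovirtmgmt and that 192.168.1.254 (the ping-monitor address in the agent.log above) is the intended gateway:

  # show current routes; a missing "default via ..." line confirms the symptom
  ip route show
  # restore the default route for this boot only (gateway and bridge name assumed)
  ip route add default via 192.168.1.254 dev ovirtmgmt
  # make it survive a reboot on CentOS 7 (ifcfg file name assumed)
  echo 'GATEWAY=192.168.1.254' >> /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt

This is only a workaround sketch for the missing-gateway symptom, not a fix for the broker error itself.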

On 23 Jun 2016, at 18:36, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi! I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self-hosted engine.
The first thing is that the host network loses the default gateway configuration, but this is not the problem.
Can you please post a bit more of vdsm.log, before and after, including also the time around 18:28, as per the error above in hosted-engine-ha?
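For example, something along these lines would extract the requested window from vdsm.log (the time range is assumed from the timestamps quoted above):

  # grab everything logged between 18:25 and 18:39 on that day
  grep -E '2016-06-23 18:(2[5-9]|3[0-9]):' /var/log/vdsm/vdsm.log > vdsm-18.25-18.39.log
  xz vdsm-18.25-18.39.log   # optional, just keeps the attachment small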

On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi! I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self-hosted engine.
Hi Stefano, can you please try "yum clean metadata" and "yum update" again? You should get vdsm 4.18.4.1; please let us know if this solves your issue.
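In command form, the suggestion is roughly the following (the target version is taken from the message; the exact build available in the repository may differ):

  yum clean metadata
  yum update
  rpm -q vdsm    # should now report vdsm-4.18.4.1 or newer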
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

Hi! After cleaning the metadata, yum did update vdsm:

[root@ovirt01 ~]# rpm -qva | grep vdsm
vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
vdsm-infra-4.18.4.1-0.el7.centos.noarch
vdsm-cli-4.18.4.1-0.el7.centos.noarch
vdsm-python-4.18.4.1-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.18.4.1-0.el7.centos.noarch
vdsm-xmlrpc-4.18.4.1-0.el7.centos.noarch
vdsm-4.18.4.1-0.el7.centos.x86_64
vdsm-api-4.18.4.1-0.el7.centos.noarch
vdsm-gluster-4.18.4.1-0.el7.centos.noarch
vdsm-jsonrpc-4.18.4.1-0.el7.centos.noarch

But this did not solve the issue:

- The host has no default gateway after a reboot
- The self-hosted engine doesn't start

vdsm.log:
https://drive.google.com/file/d/0ByMG4sDqvlZcVEJ5YVI1UWxrdE0/view?usp=sharin...

On 2016-06-23 21:41, Sandro Bonazzola wrote:
On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi! I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self-hosted engine.
Hi Stefano, can you please try "yum clean metadata" and "yum update" again? You should get vdsm 4.18.4.1; please let us know if this solves your issue.
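After the vdsm update it may also help to restart the hosted-engine services before retrying. A short sketch, assuming the standard service names on a hosted-engine host:

  systemctl restart vdsmd ovirt-ha-broker ovirt-ha-agent
  hosted-engine --vm-status
  hosted-engine --vm-start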

On Thu, Jun 23, 2016 at 11:46 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi! After cleaning the metadata, yum did update vdsm:

[root@ovirt01 ~]# rpm -qva | grep vdsm
vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
vdsm-infra-4.18.4.1-0.el7.centos.noarch
vdsm-cli-4.18.4.1-0.el7.centos.noarch
vdsm-python-4.18.4.1-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.18.4.1-0.el7.centos.noarch
vdsm-xmlrpc-4.18.4.1-0.el7.centos.noarch
vdsm-4.18.4.1-0.el7.centos.x86_64
vdsm-api-4.18.4.1-0.el7.centos.noarch
vdsm-gluster-4.18.4.1-0.el7.centos.noarch
vdsm-jsonrpc-4.18.4.1-0.el7.centos.noarch

But this did not solve the issue:

- The host has no default gateway after a reboot
- The self-hosted engine doesn't start
Martin, Dan, can you please look into this? Stefano, can you please share a full sos report from the host?
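A minimal way to produce the requested report, assuming the sos package is available on this CentOS 7 host:

  yum install -y sos
  sosreport --batch
  # the resulting archive is usually written under /var/tmp (path may vary by sos version)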
vdsm.log:
https://drive.google.com/file/d/0ByMG4sDqvlZcVEJ5YVI1UWxrdE0/view?usp=sharin...
On 2016-06-23 21:41, Sandro Bonazzola wrote:
On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi!
I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self-hosted engine.
Hi Stefano, can you please try "yum clean metadata" and "yum update" again? You should get vdsm 4.18.4.1; please let us know if this solves your issue.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

After an additional yum clean all && yum update, some other rpms were updated.

Something changed.....
My setup has the engine storage on gluster, but mounted with NFS.
Now the gluster daemon doesn't start automatically at boot. After starting gluster manually, the error is the same:

==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-19::ERROR::2016-06-24 10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle
    data)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch
    .set_storage_domain(client, sd_type, **options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain
    self._backends[client].connect()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect
    volUUID=volume.volume_uuid
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path
    volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable

VDSM.log

jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0 aborting False
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Following parameters ['type'] were not recognized
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Provided value "2" not defined in DiskType enum for Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter capacity is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Required property allocType is not provided when calling Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter mtime is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter ctime is not int type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter truesize is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter apparentsize is not uint type
jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '6838c974-7656-4b40-87cc-f562ff0b2a4c', 'ctime': '1423074433', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': u'c66a14d3-112a-4104-9025-76bb2e7ad9f1', 'truesize': '1048576', 'type': 'PREALLOCATED'}
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:36,514::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:36,515::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,807::betterAsyncore::132::vds.dispatcher::(send) SSL error during sending data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,959::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: bad write retry
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:43,959::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:47,859::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:47,860::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:51,725::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:51,726::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
Reactor thread::INFO::2016-06-24 10:10:53,851::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:192.168.1.50:48554
Reactor thread::DEBUG::2016-06-24 10:10:53,860::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-06-24 10:10:53,861::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:192.168.1.50:48554
Reactor thread::INFO::2016-06-24 10:10:53,862::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-06-24 10:10:53,862::stompreactor::482::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:192.168.1.50', 48554)
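To make the gluster daemon come back automatically at boot, enabling its unit should be enough; a sketch, assuming the service is named glusterd on this CentOS 7 host:

  systemctl enable glusterd
  systemctl start glusterd
  systemctl status glusterd
  # the engine storage domain is mounted over gluster's NFS export, so this should list it again:
  showmount -e localhost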
--
Stefano Danzi
IT Systems Manager
HAWAI ITALIA S.r.l.
Via Forte Garofolo, 16
37057 S. Giovanni Lupatoto Verona Italia
P. IVA 01680700232
tel. +39/045/8266400 fax +39/045/8266401
Web www.hawai.it

How can I change the self-hosted engine configuration to mount the gluster storage directly, without going through gluster NFS?

Maybe this would solve it...

On 24/06/2016 10:16, Stefano Danzi wrote:
After an additional "yum clean all && yum update", some other RPMs were updated.
Something changed... My setup has the engine storage on gluster, but mounted via NFS. Now the gluster daemon doesn't start automatically at boot. After starting gluster manually, the error is the same:
==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-19::ERROR::2016-06-24 10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle
    data)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch
    .set_storage_domain(client, sd_type, **options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain
    self._backends[client].connect()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect
    volUUID=volume.volume_uuid
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path
    volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable
vdsm.log:
jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0 aborting False
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Following parameters ['type'] were not recognized
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Provided value "2" not defined in DiskType enum for Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter capacity is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Required property allocType is not provided when calling Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter mtime is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter ctime is not int type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter truesize is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter apparentsize is not uint type
jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '6838c974-7656-4b40-87cc-f562ff0b2a4c', 'ctime': '1423074433', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': u'c66a14d3-112a-4104-9025-76bb2e7ad9f1', 'truesize': '1048576', 'type': 'PREALLOCATED'}
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:36,514::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:36,515::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,807::betterAsyncore::132::vds.dispatcher::(send) SSL error during sending data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,959::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: bad write retry
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:43,959::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:47,859::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:47,860::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:51,725::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:51,726::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
Reactor thread::INFO::2016-06-24 10:10:53,851::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:192.168.1.50:48554
Reactor thread::DEBUG::2016-06-24 10:10:53,860::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-06-24 10:10:53,861::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:192.168.1.50:48554
Reactor thread::INFO::2016-06-24 10:10:53,862::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2016-06-24 10:10:53,862::stompreactor::482::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:192.168.1.50', 48554)
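[Editorial note] The bottom of that broker traceback, socket.create_connection() failing with [Errno 101], is easy to reproduce in isolation. A minimal sketch, assuming the broker ends up with an IPv6 target such as "::" on a host where IPv6 is unusable; the address and port 54321 are only taken from the strace output later in this thread, not from a confirmed vdsm configuration:

# Sketch: reproduce "[Errno 101] Network is unreachable" from
# socket.create_connection() when the resolved target is IPv6 but no
# IPv6 route exists (assumption based on the strace later in the thread).
import socket

try:
    # create_connection() walks the getaddrinfo() results; if the only
    # candidate is an AF_INET6 address and IPv6 is not routable,
    # connect() fails with ENETUNREACH.
    socket.create_connection(("::", 54321), 5)
except socket.error as err:
    print("connect failed: %s" % (err,))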
On 24/06/2016 8:18, Sandro Bonazzola wrote:
On Thu, Jun 23, 2016 at 11:46 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi! After cleaning the metadata, yum did update vdsm:
[root@ovirt01 ~]# rpm -qva | grep vdsm
vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
vdsm-infra-4.18.4.1-0.el7.centos.noarch
vdsm-cli-4.18.4.1-0.el7.centos.noarch
vdsm-python-4.18.4.1-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.18.4.1-0.el7.centos.noarch
vdsm-xmlrpc-4.18.4.1-0.el7.centos.noarch
vdsm-4.18.4.1-0.el7.centos.x86_64
vdsm-api-4.18.4.1-0.el7.centos.noarch
vdsm-gluster-4.18.4.1-0.el7.centos.noarch
vdsm-jsonrpc-4.18.4.1-0.el7.centos.noarch
But this did not solve the issue.
- The host has no default gateway after a reboot
- The self-hosted engine doesn't start
Martin, Dan, can you please look into this? Stefano, can you please share a full sos report from the host?
vdsm.log: https://drive.google.com/file/d/0ByMG4sDqvlZcVEJ5YVI1UWxrdE0/view?usp=sharin...
On 2016-06-23 21:41, Sandro Bonazzola wrote:
On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi! I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start the self-hosted engine.
Hi Stefano, can you please try "yum clean metadata" and "yum update" again? You should get vdsm 4.18.4.1; please let us know if this solves your issue.
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com
--
Stefano Danzi Responsabile sistemi informativi
HAWAI ITALIA S.r.l. Via Forte Garofolo, 16 37057 S. Giovanni Lupatoto Verona Italia
P. IVA 01680700232
tel. +39/045/8266400 fax +39/045/8266401 Web www.hawai.it

Hi!

I found a workaround!

The broker process tries to connect to vdsm on the IPv4 host address using an IPv6 connection (I noticed this by running strace on the process), but IPv6 is not initialized at boot. (Why connect to an IPv4 address using IPv6?)

I added the following lines to crontab:

@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/lo/disable_ipv6' | /usr/bin/at now+1 minutes
@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/ovirtmgmt/disable_ipv6' | /usr/bin/at now+1 minutes
@reboot echo '/usr/sbin/route add default gw 192.168.1.254' | /usr/bin/at now+1 minutes
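[Editorial note] A quick way to verify that those toggles actually took effect after boot is to read the same /proc paths back. A small sketch; the interface names are taken from the crontab lines above, nothing else is assumed:

# Sketch: report whether IPv6 is currently disabled on the interfaces
# touched by the workaround above (0 = enabled, 1 = disabled).
for iface in ("lo", "ovirtmgmt"):
    path = "/proc/sys/net/ipv6/conf/%s/disable_ipv6" % iface
    try:
        with open(path) as f:
            print("%s: disable_ipv6 = %s" % (iface, f.read().strip()))
    except IOError as err:
        print("%s: cannot read %s (%s)" % (iface, path, err))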

On Fri, Jun 24, 2016 at 06:45:24PM +0200, Stefano Danzi wrote:
Hi!
I found a workaround!
The broker process tries to connect to vdsm on the IPv4 host address using an IPv6 connection (I noticed this by running strace on the process), but IPv6 is not initialized at boot. (Why connect to an IPv4 address using IPv6?)
Actually, we make an effort to disable IPv6 on oVirt host networks; keeping it enabled without an explicit request was deemed a security issue.

Can you share your strace output and the relevant lines in vdsm.log? I don't understand what issue you are reporting.
I added the following lines to crontab:
@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/lo/disable_ipv6' | /usr/bin/at now+1 minutes
@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/ovirtmgmt/disable_ipv6' | /usr/bin/at now+1 minutes
@reboot echo '/usr/sbin/route add default gw 192.168.1.254' | /usr/bin/at now+1 minutes

Hi! The broker error is:

==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-06-27 09:27:03,311::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 140293563619152

==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-25::ERROR::2016-06-27 09:27:03,314::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle
    data)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch
    .set_storage_domain(client, sd_type, **options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain
    self._backends[client].connect()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect
    volUUID=volume.volume_uuid
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path
    volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable

Broker PID is:

[root@ovirt01 ~]# ps -ef | grep bro
vdsm       925     1  0 09:23 ?        00:00:01 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker --no-daemon
root      3558  3405  0 09:28 pts/0    00:00:00 grep --color=auto bro

[root@ovirt01 ~]# strace -f -p 925
[..]
[pid 3748] munmap(0x7f98bded0000, 4096) = 0
[pid 3748] socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 8
[pid 3748] connect(8, {sa_family=AF_INET6, sin6_port=htons(54321), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
[pid 3748] close(8) = 0
[pid 3748] select(0, NULL, NULL, NULL, {1, 0} <unfinished ...>
[pid 3742] <... recvfrom resumed> "set-storage-domain VdsmBackend h"..., 8192, 0, NULL, NULL) = 676
[pid 3742] fcntl(6, F_GETFL) = 0x2 (flags O_RDWR)
[pid 3742] fcntl(6, F_SETFL, O_RDWR|O_NONBLOCK) = 0
[pid 3742] openat(AT_FDCWD, "/rhev/data-center/mnt", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 8
[pid 3742] getdents(8, /* 7 entries */, 32768) = 312
[pid 3742] getdents(8, /* 0 entries */, 32768) = 0
[pid 3742] close(8) = 0
[pid 3742] access("/rhev/data-center/mnt/ovirt01.hawai.lan:_home_storage-ovirtShe/46f55a31-f35f-465c-b3e2-df45c05e06a7", F_OK) = -1 ENOENT (No such file or directory)
[pid 3742] access("/rhev/data-center/mnt/glusterSD/46f55a31-f35f-465c-b3e2-df45c05e06a7", F_OK) = -1 ENOENT (No such file or directory)
[pid 3742] access("/rhev/data-center/mnt/ovirtbk-mount.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7", F_OK) = -1 ENOENT (No such file or directory)
[pid 3742] access("/rhev/data-center/mnt/ovirtbk-sheng.hawai.lan:_var_lib_exports_iso/46f55a31-f35f-465c-b3e2-df45c05e06a7", F_OK) = -1 ENOENT (No such file or directory)
[pid 3742] access("/rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7", F_OK) = 0
[pid 3742] access("/etc/pki/vdsm/keys/vdsmkey.pem", R_OK) = 0
[pid 3742] access("/etc/pki/vdsm/certs/vdsmcert.pem", R_OK) = 0
[pid 3742] access("/etc/pki/vdsm/certs/cacert.pem", R_OK) = 0
[pid 3742] open("/etc/pki/vdsm/certs/vdsmcert.pem", O_RDONLY) = 8
[pid 3742] fstat(8, {st_mode=S_IFREG|0644, st_size=1574, ...}) = 0
[pid 3742] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f98bded0000
[pid 3742] read(8, "-----BEGIN CERTIFICATE-----\nMIIE"..., 4096) = 1574
[pid 3742] read(8, "", 4096) = 0
[pid 3742] close(8) = 0
[pid 3742] munmap(0x7f98bded0000, 4096) = 0
[pid 3742] open("/etc/pki/vdsm/keys/vdsmkey.pem", O_RDONLY) = 8
[pid 3742] fstat(8, {st_mode=S_IFREG|0440, st_size=1675, ...}) = 0
[pid 3742] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f98bded0000
[pid 3742] read(8, "-----BEGIN RSA PRIVATE KEY-----\n"..., 4096) = 1675
[pid 3742] close(8) = 0
[pid 3742] munmap(0x7f98bded0000, 4096) = 0
[pid 3742] socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 8
[pid 3742] connect(8, {sa_family=AF_INET6, sin6_port=htons(54321), inet_pton(AF_INET6, "::", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
[pid 3742] close(8) = 0
[pid 3742] stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", {st_mode=S_IFREG|0644, st_size=14078, ...}) = 0
[pid 3742] stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", {st_mode=S_IFREG|0644, st_size=14078, ...}) = 0
[pid 3742] stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", {st_mode=S_IFREG|0644, st_size=8885, ...}) = 0
[pid 3742] stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", {st_mode=S_IFREG|0644, st_size=32723, ...}) = 0
[pid 3742] stat("/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", {st_mode=S_IFREG|0644, st_size=32723, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/xmlrpclib.py", {st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/xmlrpclib.py", {st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/xmlrpclib.py", {st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/xmlrpclib.py", {st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/xmlrpclib.py", {st_mode=S_IFREG|0644, st_size=51801, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/httplib.py", {st_mode=S_IFREG|0644, st_size=48234, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/httplib.py", {st_mode=S_IFREG|0644, st_size=48234, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/httplib.py", {st_mode=S_IFREG|0644, st_size=48234, ...}) = 0
[pid 3742] stat("/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", {st_mode=S_IFREG|0644, st_size=10720, ...}) = 0
[pid 3742] stat("/usr/lib64/python2.7/socket.py", {st_mode=S_IFREG|0644, st_size=20512, ...}) = 0
[pid 3742] sendto(3, "<11>ovirt-ha-broker ovirt_hosted"..., 1970, 0, NULL, 0) = 1970
[pid 3742] stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2652, ...}) = 0
[pid 3742] write(4, "Thread-49::ERROR::2016-06-27 09:"..., 2012) = 2012
[pid 3742] write(2, "ERROR:ovirt_hosted_engine_ha.bro"..., 1950) = 1950
[pid 3742] poll([{fd=6, events=POLLOUT}], 1, 700) = 1 ([{fd=6, revents=POLLOUT}])
[pid 3742] sendto(6, "failure <class 'socket.error'>\n", 31, 0, NULL, 0) = 31
[pid 3742] fcntl(6, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
[pid 3742] fcntl(6, F_SETFL, O_RDWR) = 0
[pid 3742] recvfrom(6, <unfinished ...>

Enabling IPv6 only on the ovirtmgmt interface, the broker still reports the Python exception, but in vdsm.log I now see:

jsonrpc.Executor/4::DEBUG::2016-06-27 10:01:36,697::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getHardwareInfo' in bridge with {'systemProductName': 'To be filled by O.E.M.', 'systemSerialNumber': 'To be filled by O.E.M.', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': 'To be filled by O.E.M.', 'systemUUID': 'F90B3100-D83F-11DD-8DD8-40167E3684F1', 'systemManufacturer': 'To be filled by O.E.M.'}
Reactor thread::INFO::2016-06-27 10:01:38,703::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:192.168.1.50:56228
Reactor thread::DEBUG::2016-06-27 10:01:38,713::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2016-06-27 10:01:38,714::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:192.168.1.50:56228
Reactor thread::INFO::2016-06-27 10:01:38,714::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
JsonRpc (StompReactor)::INFO::2016-06-27 10:01:38,715::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
Reactor thread::DEBUG::2016-06-27 10:01:38,716::stompreactor::482::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:192.168.1.50', 56228)
jsonrpc.Executor/5::DEBUG::2016-06-27 10:01:40,319::__init__::522::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Host.getHardwareInfo' in bridge with {}
On Fri, Jun 24, 2016 at 06:45:24PM +0200, Stefano Danzi wrote:
Hi!
I found a workaround!
The broker process tries to connect to vdsm at the IPv4 host address using an IPv6 connection (I noticed that by stracing the process), but IPv6 is not initialized at boot. (Why connect to an IPv4 address using IPv6?)

Actually, we make an effort to disable IPv6 on oVirt host networks. Keeping them open without explicit request was deemed a security issue.
Can you share your strace line and the relevant lines in vdsm.log? I don't understand what issue you are reporting.
I added the following lines to crontab:
@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/lo/disable_ipv6' | /usr/bin/at now+1 minutes
@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/ovirtmgmt/disable_ipv6' | /usr/bin/at now+1 minutes
@reboot echo '/usr/sbin/route add default gw 192.168.1.254' | /usr/bin/at now+1 minutes
On 24/06/2016 12:36, Stefano Danzi wrote:
How can I change the self hosted engine configuration to mount the gluster storage directly, without passing through gluster NFS?
Maybe this solves it...
On 24/06/2016 10:16, Stefano Danzi wrote:
After an additional yum clean all && yum update, some other rpms were updated.
Something changed... My setup has the engine storage on gluster, but mounted via NFS. Now the gluster daemon doesn't automatically start at boot. After starting gluster manually, the error is the same:
==> /var/log/ovirt-hosted-engine-ha/broker.log <== Thread-19::ERROR::2016-06-24 10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle data) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch .set_storage_domain(client, sd_type, **options) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain self._backends[client].connect() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect volUUID=volume.volume_uuid File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path volUUID File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request verbose=self.__verbose File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request return self.single_request(host, handler, request_body, verbose) File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request self.send_content(h, request_body) File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content connection.endheaders(request_body) File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders self._send_output(message_body) File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output self.send(msg) File "/usr/lib64/python2.7/httplib.py", line 797, in send self.connect() File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect sock = socket.create_connection((self.host, self.port), self.timeout) File "/usr/lib64/python2.7/socket.py", line 571, in create_connection raise err error: [Errno 101] Network is unreachable
VDSM.log
jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0 aborting False jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Following parameters ['type'] were not recognized jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Provided value "2" not defined in DiskType enum for Volume.getInfo jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter capacity is not uint type jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Required property allocType is not provided when calling Volume.getInfo jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter mtime is not uint type jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter ctime is not int type jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter truesize is not uint type jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter apparentsize is not uint type jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '6838c974-7656-4b40-87cc-f562ff0b2a4c', 'ctime': '1423074433', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': u'c66a14d3-112a-4104-9025-76bb2e7ad9f1', 'truesize': '1048576', 'type': 'PREALLOCATED'} JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:36,514::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer') JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:36,515::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,807::betterAsyncore::132::vds.dispatcher::(send) SSL error during sending data: (104, 'Connection reset by peer') JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,959::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: bad write retry JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:43,959::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:47,859::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer') JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:47,860::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:51,725::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer') JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:51,726::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event Reactor thread::INFO::2016-06-24 10:10:53,851::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:192.168.1.50:48554 Reactor thread::DEBUG::2016-06-24 10:10:53,860::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11 Reactor thread::INFO::2016-06-24 10:10:53,861::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:192.168.1.50:48554 Reactor thread::INFO::2016-06-24 10:10:53,862::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request Reactor thread::DEBUG::2016-06-24 10:10:53,862::stompreactor::482::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:192.168.1.50', 48554)
On 24/06/2016 8:18, Sandro Bonazzola wrote:
On Thu, Jun 23, 2016 at 11:46 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi! After cleaning the metadata, yum did an update of vdsm:
[root@ovirt01 ~]# rpm -qva | grep vdsm
vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
vdsm-infra-4.18.4.1-0.el7.centos.noarch
vdsm-cli-4.18.4.1-0.el7.centos.noarch
vdsm-python-4.18.4.1-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.18.4.1-0.el7.centos.noarch
vdsm-xmlrpc-4.18.4.1-0.el7.centos.noarch
vdsm-4.18.4.1-0.el7.centos.x86_64
vdsm-api-4.18.4.1-0.el7.centos.noarch
vdsm-gluster-4.18.4.1-0.el7.centos.noarch
vdsm-jsonrpc-4.18.4.1-0.el7.centos.noarch
But this did not solve the issue.
- Host doesn't have a default gateway after a reboot
- Self hosted engine doesn't start.
Martin, Dan, can you please look into this? Stefano, can you please share a full sos report from the host?
vdsm.log: https://drive.google.com/file/d/0ByMG4sDqvlZcVEJ5YVI1UWxrdE0/view?usp=sharin...
On 2016-06-23 21:41, Sandro Bonazzola wrote:
On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi! I've just upgrade oVirt from 3.6 to 4.0 and I'm not able to start the self hosted engine.
Hi Stefano, can you please try "yum clean metadata" and "yum update" again? You should get vdsm 4.18.4.1; please let us know if this solves your issue.
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com <http://redhat.com>
Web: www.hawai.it

On Mon, Jun 27, 2016 at 10:08:33AM +0200, Stefano Danzi wrote:
Hi!
Thanks for the detailed logging!
The broker error is:
==> /var/log/ovirt-hosted-engine-ha/agent.log <== MainThread::INFO::2016-06-27 09:27:03,311::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 140293563619152
==> /var/log/ovirt-hosted-engine-ha/broker.log <== Thread-25::ERROR::2016-06-27 09:27:03,314::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle data) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch .set_storage_domain(client, sd_type, **options) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain self._backends[client].connect() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect volUUID=volume.volume_uuid File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path volUUID
We have two issues here. First is that https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_... is still using vdscli to contact vdsm, instead of the preferred jsonrpccli.

The second is that vdscli.connect's heuristic ends up reading the local server address from the vdsm config, where it finds the default ipv6-local address of "::".

Please try setting

[addresses]
management_ip='0.0.0.0'

in your /etc/vdsm/vdsm.conf instead of the crontab hacks.

Would you please open a bug about the two issues (ovirt-hosted-engine-ha and vdsm networking)?

Would you report the output of `netstat -nltp` on your host, as I do not completely understand why no interface (not even the loopback one) was listening on ipv6?

Regards, Dan.
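The traceback above bottoms out in socket.create_connection((self.host, self.port), self.timeout), and the strace earlier in the thread shows that call connecting to "::" on port 54321. The snippet below is only a minimal stdlib sketch of that failure mode, not code from ovirt-hosted-engine-ha; the two addresses and the port are simply the values discussed in this thread.

# Sketch only: reproduce the connect the broker traceback ends in,
# assuming the host is "::" (what the broker ends up using) and the
# port is vdsm's 54321.
import errno
import socket

def probe(host, port=54321, timeout=5):
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return "connected"
    except socket.error as e:
        return "errno %s (%s)" % (e.errno, errno.errorcode.get(e.errno, "?"))

if __name__ == "__main__":
    # With IPv6 disabled on all interfaces this prints errno 101 (ENETUNREACH),
    # matching "[Errno 101] Network is unreachable" in broker.log.
    print("'::'      -> %s" % probe("::"))
    # Plain IPv4: routed via loopback, so at worst ECONNREFUSED, never ENETUNREACH.
    print("'0.0.0.0' -> %s" % probe("0.0.0.0"))

With IPv6 disabled everywhere the first probe fails immediately with errno 101, while the IPv4 wildcard never takes the IPv6 path at all, which is why the vdsm.conf override above sidesteps the problem without re-enabling IPv6.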

On Tue, Jun 28, 2016 at 3:02 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Jun 27, 2016 at 10:08:33AM +0200, Stefano Danzi wrote:
Hi!
Thanks for the detailed logging!
The broker error is:
==> /var/log/ovirt-hosted-engine-ha/agent.log <== MainThread::INFO::2016-06-27 09:27:03,311::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 140293563619152
==> /var/log/ovirt-hosted-engine-ha/broker.log <== Thread-25::ERROR::2016-06-27 09:27:03,314::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle data) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch .set_storage_domain(client, sd_type, **options) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain self._backends[client].connect() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect volUUID=volume.volume_uuid File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path volUUID
We have two issues here. First is that https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_... is still using vdscli to contact vdsm, instead of the preferred jsonrpccli.
We already have this one: https://bugzilla.redhat.com/show_bug.cgi?id=1101554
The second is that vdscli.connect's heuristic ends up reading the local server address from vdsm config, where it finds the default ipv6-local address of "::".
Please try setting
[addresses]
management_ip='0.0.0.0'
in your /etc/vdsm/vdsm.conf instead of the crontab hacks.
Would you please open a bug about the two issues (ovirt-hosted-engine-ha and vdsm networking)?
Would you report the output of `netstat -nltp` on your host, as I do not completely understand why no interface (not even the loopback one) was listening on ipv6?
Regards, Dan.
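For context on why that override changes the broker's behaviour: a client that takes vdsm's address from the [addresses] section of /etc/vdsm/vdsm.conf falls back to the IPv6 wildcard "::" when management_ip is not set. The snippet below is only a rough sketch of that mechanism as Dan describes it, not vdsm's actual code; the file path and the "::" fallback are assumptions taken from this thread.

# Hypothetical illustration of reading [addresses]/management_ip from vdsm.conf.
from ConfigParser import SafeConfigParser  # Python 2.7, as in the tracebacks above

def management_address(path="/etc/vdsm/vdsm.conf", default="::"):
    cfg = SafeConfigParser()
    cfg.read(path)  # a missing file is silently ignored
    if cfg.has_option("addresses", "management_ip"):
        value = cfg.get("addresses", "management_ip").strip("'\"")
        return value or default
    return default

if __name__ == "__main__":
    # With the override from this thread ([addresses] management_ip='0.0.0.0')
    # this returns '0.0.0.0', so the client talks to vdsm over IPv4.
    print(management_address())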

On Tue, Jun 28, 2016 at 04:08:51PM +0200, Simone Tiraboschi wrote:
On Tue, Jun 28, 2016 at 3:02 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Jun 27, 2016 at 10:08:33AM +0200, Stefano Danzi wrote:
Hi!
Thanks for the detailed logging!
The broker error is:
==> /var/log/ovirt-hosted-engine-ha/agent.log <== MainThread::INFO::2016-06-27 09:27:03,311::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 140293563619152
==> /var/log/ovirt-hosted-engine-ha/broker.log <== Thread-25::ERROR::2016-06-27 09:27:03,314::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle data) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch .set_storage_domain(client, sd_type, **options) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain self._backends[client].connect() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect volUUID=volume.volume_uuid File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path volUUID
We have two issues here. First is that https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_... is still using vdscli to contact vdsm, instead of the preferred jsonrpccli.
We already have this one: https://bugzilla.redhat.com/show_bug.cgi?id=1101554
Bug 1101554 - [RFE] HE-setup: use vdsm api instead of vdsClient

It does not seem to be the same thing; here I'm referring to ovirt-hosted-engine-ha (not -setup). -ha *is* using vdsm API, but in one condition it's the soon-to-be-deprecated xmlrpc.

On Tue, Jun 28, 2016 at 5:24 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Tue, Jun 28, 2016 at 04:08:51PM +0200, Simone Tiraboschi wrote:
On Tue, Jun 28, 2016 at 3:02 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Jun 27, 2016 at 10:08:33AM +0200, Stefano Danzi wrote:
Hi!
Thanks for the detailed logging!
The broker error is:
==> /var/log/ovirt-hosted-engine-ha/agent.log <== MainThread::INFO::2016-06-27 09:27:03,311::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 140293563619152
==> /var/log/ovirt-hosted-engine-ha/broker.log <== Thread-25::ERROR::2016-06-27 09:27:03,314::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle data) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch .set_storage_domain(client, sd_type, **options) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain self._backends[client].connect() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect volUUID=volume.volume_uuid File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path volUUID
We have two issues here. First is that https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_... is still using vdscli to contact vdsm, instead of the preferred jsonrpccli.
We already have this one: https://bugzilla.redhat.com/show_bug.cgi?id=1101554
Bug 1101554 - [RFE] HE-setup: use vdsm api instead of vdsClient
It does not seem to be the same thing; here I'm referring to ovirt-hosted-engine-ha (not -setup). -ha *is* using vdsm API, but in one condition it's the soon-to-be-deprecated xmlrpc.
ovirt-hosted-engine-setup is already using just API via jsonrpc. The issue is that ovirt-hosted-engine-ha somewhere uses vdsClient which AFAIK internally still uses xmlrpc

On Tue, Jun 28, 2016 at 06:06:39PM +0200, Simone Tiraboschi wrote:
On Tue, Jun 28, 2016 at 5:24 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Tue, Jun 28, 2016 at 04:08:51PM +0200, Simone Tiraboschi wrote:
On Tue, Jun 28, 2016 at 3:02 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Jun 27, 2016 at 10:08:33AM +0200, Stefano Danzi wrote:
Hi!
Thanks for the detailed logging!
The broker error is:
==> /var/log/ovirt-hosted-engine-ha/agent.log <== MainThread::INFO::2016-06-27 09:27:03,311::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 140293563619152
==> /var/log/ovirt-hosted-engine-ha/broker.log <== Thread-25::ERROR::2016-06-27 09:27:03,314::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle data) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch .set_storage_domain(client, sd_type, **options) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain self._backends[client].connect() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect volUUID=volume.volume_uuid File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path volUUID
We have two issues here. First is that https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_... is still using vdscli to contact vdsm, instead of the preferred jsonrpccli.
We already have this one: https://bugzilla.redhat.com/show_bug.cgi?id=1101554
Bug 1101554 - [RFE] HE-setup: use vdsm api instead of vdsClient
It does not seem to be the same thing; here I'm referring to ovirt-hosted-engine-ha (not -setup). -ha *is* using vdsm API, but in one condition it's the soon-to-be-deprecated xmlrpc.
ovirt-hosted-engine-setup is already using just API via jsonrpc. The issue is that ovirt-hosted-engine-ha somewhere uses vdsClient which AFAIK internally still uses xmlrpc
It's not a general "somewhere", I gave the specific link ;-) https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_...

On 28/06/2016 15:02, Dan Kenigsberg wrote:
On Mon, Jun 27, 2016 at 10:08:33AM +0200, Stefano Danzi wrote:
Hi!
Thanks for the detailed logging!
The broker error is:
==> /var/log/ovirt-hosted-engine-ha/agent.log <== MainThread::INFO::2016-06-27 09:27:03,311::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 140293563619152
==> /var/log/ovirt-hosted-engine-ha/broker.log <== Thread-25::ERROR::2016-06-27 09:27:03,314::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle data) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch .set_storage_domain(client, sd_type, **options) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain self._backends[client].connect() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect volUUID=volume.volume_uuid File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path volUUID We have two issues here. First is that https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_... is still using vdscli to contact vdsm, instead of the preferred jsonrpccli.
The second is that vdscli.connect's heuristic ends up reading the local server address from vdsm config, where it finds the default ipv6-local address of "::".
Please try setting
[addresses]
management_ip='0.0.0.0'
in your /etc/vdsm/vdsm.conf instead of the crontab hacks.
This solves the issue, but I still don't have a default gateway on the ovirtmgmt interface.
Would you please open a bug about the two issues (ovirt-hosted-engine-ha and vdsm networking)?
Here: https://bugzilla.redhat.com/show_bug.cgi?id=1350883
Would you report the output of `netstat -nltp` on your host, as I do not completely understand why no interface (not even the loopback one) was listening on ipv6? Here:

[root@ovirt01 ~]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp        0      0 0.0.0.0:39373    0.0.0.0:*        LISTEN  2200/rpc.statd
tcp        0      0 0.0.0.0:111      0.0.0.0:*        LISTEN  969/rpcbind
tcp        0      0 0.0.0.0:54322    0.0.0.0:*        LISTEN  943/python
tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  1571/sshd
tcp        0      0 0.0.0.0:858      0.0.0.0:*        LISTEN  1946/glusterfs
tcp        0      0 0.0.0.0:49152    0.0.0.0:*        LISTEN  1929/glusterfsd
tcp        0      0 0.0.0.0:49153    0.0.0.0:*        LISTEN  1968/glusterfsd
tcp        0      0 0.0.0.0:2049     0.0.0.0:*        LISTEN  1946/glusterfs
tcp        0      0 0.0.0.0:38465    0.0.0.0:*        LISTEN  1946/glusterfs
tcp        0      0 0.0.0.0:38466    0.0.0.0:*        LISTEN  1946/glusterfs
tcp        0      0 0.0.0.0:16514    0.0.0.0:*        LISTEN  1603/libvirtd
tcp        0      0 0.0.0.0:38468    0.0.0.0:*        LISTEN  1946/glusterfs
tcp        0      0 0.0.0.0:38469    0.0.0.0:*        LISTEN  1946/glusterfs
tcp        0      0 0.0.0.0:24007    0.0.0.0:*        LISTEN  1585/glusterd
tcp6       0      0 :::54321         :::*             LISTEN  1893/python
tcp6       0      0 :::22            :::*             LISTEN  1571/sshd
tcp6       0      0 :::16514         :::*             LISTEN  1603/libvirtd
Regards, Dan.
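One detail of that netstat output worth spelling out: vdsm (pid 1893) appears only as a tcp6 listener on :::54321, yet the vdsm.log lines earlier in the thread show it accepting IPv4 clients as ::ffff:192.168.1.50. That is ordinary dual-stack behaviour. The standalone sketch below (not oVirt code; it uses a placeholder port and assumes IPv6 is enabled so the bind succeeds) shows an AF_INET6 socket bound to "::" accepting an IPv4 client as an IPv6-mapped address:

# Standalone demo: a dual-stack IPv6 listener accepts IPv4 clients and
# reports them as ::ffff:a.b.c.d, which is what vdsm.log shows.
import socket
import threading

PORT = 54399  # placeholder, deliberately not vdsm's real 54321

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # allow IPv4-mapped clients
srv.bind(("::", PORT))
srv.listen(1)

client = threading.Thread(
    target=lambda: socket.create_connection(("127.0.0.1", PORT)).close())
client.start()

conn, peer = srv.accept()
print("accepted from %s" % peer[0])  # prints ::ffff:127.0.0.1
conn.close()
srv.close()
client.join()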

Hi to all! On 28/06/2016 17:24, Stefano Danzi wrote:
[CUT] We have two issues here. First is that
https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-ha.git;a=blob;f=ovirt_...
is still using vdscli to contact vdsm, instead of the preferred jsonrpccli.
The second is that vdscli.connect's heuristic ends up reading the local server address from vdsm config, where it finds the default ipv6-local address of "::".
Please try setting
[addresses]
management_ip='0.0.0.0'
in your /etc/vdsm/vdsm.conf instead of the crontab hacks.
This solves the issue, but I still don't have a default gateway on the ovirtmgmt interface.
Using this configuration everything works, but every 50 minutes I receive three emails from the broker:
- ovirt-hosted-engine state transition StartState-ReinitializeFSM
- ovirt-hosted-engine state transition ReinitializeFSM-EngineStarting
- ovirt-hosted-engine state transition EngineStarting-EngineUp

The engine uptime is about 15 hours, so the engine VM is not really being rebooted.

Agent log:

MainThread::INFO::2016-06-29 08:46:10,453::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state ReinitializeFSM (score: 0) MainThread::INFO::2016-06-29 08:46:20,547::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Trying: notify time=1467182780.55 type=state_transition detail=ReinitializeFSM-EngineStarting hostname='ovirt01.hawai.lan' MainThread::INFO::2016-06-29 08:46:20,732::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (ReinitializeFSM-EngineStarting) sent? sent MainThread::INFO::2016-06-29 08:46:20,733::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) Initializing VDSM MainThread::INFO::2016-06-29 08:46:24,430::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Connecting the storage MainThread::INFO::2016-06-29 08:46:24,431::storage_server::218::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server MainThread::INFO::2016-06-29 08:46:31,764::storage_server::225::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server MainThread::INFO::2016-06-29 08:46:31,780::storage_server::232::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain MainThread::INFO::2016-06-29 08:46:31,945::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Preparing images MainThread::INFO::2016-06-29 08:46:31,946::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images) Preparing images MainThread::INFO::2016-06-29 08:46:35,895::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Reloading vm.conf from the shared storage domain MainThread::INFO::2016-06-29 08:46:35,896::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Trying to get a fresher copy of vm configuration from the OVF_STORE MainThread::INFO::2016-06-29 08:46:39,621::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:8d07965c-a5c4-4057-912d-901f80cf246c, volUUID:ce3aa63e-e1c4-498e-bdca-9d2e9f47f0f9 MainThread::INFO::2016-06-29 08:46:39,667::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a, volUUID:3c477b06-063e-4f01-bd05-84c7d467742b MainThread::INFO::2016-06-29 08:46:39,760::ovf_store::111::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE MainThread::INFO::2016-06-29 08:46:39,761::ovf_store::118::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7/images/bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a/3c477b06-063e-4f01-bd05-84c7d467742b MainThread::INFO::2016-06-29 08:46:39,772::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Found an OVF for HE VM, trying to convert MainThread::INFO::2016-06-29 08:46:39,774::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Got vm.conf from OVF_STORE MainThread::INFO::2016-06-29 08:46:43,489::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineStarting (score: 3400) MainThread::INFO::2016-06-29 08:46:53,605::state_decorators::88::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Timeout cleared while transitioning <class 'ovirt_hosted_engine_ha.agent.states.EngineStarting'> -> <class 'ovirt_hosted_engine_ha.agent.states.EngineUp'> MainThread::INFO::2016-06-29 08:46:53,606::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Trying: notify time=1467182813.

Many times a day I see this in the log:

MainThread::INFO::2016-06-29 08:47:37,833::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images) Preparing images MainThread::INFO::2016-06-29 08:47:41,721::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Reloading vm.conf from the shared storage domain MainThread::INFO::2016-06-29 08:47:41,722::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Trying to get a fresher copy of vm configuration from the OVF_STORE MainThread::INFO::2016-06-29 08:47:45,444::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:8d07965c-a5c4-4057-912d-901f80cf246c, volUUID:ce3aa63e-e1c4-498e-bdca-9d2e9f47f0f9 MainThread::INFO::2016-06-29 08:47:45,490::ovf_store::102::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a, volUUID:3c477b06-063e-4f01-bd05-84c7d467742b MainThread::INFO::2016-06-29 08:47:45,583::ovf_store::111::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE MainThread::INFO::2016-06-29 08:47:45,583::ovf_store::118::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/ovirt01.hawai.lan:_engine/46f55a31-f35f-465c-b3e2-df45c05e06a7/images/bd9aaf0b-8435-4d78-9871-8c7a7f7fa02a/3c477b06-063e-4f01-bd05-84c7d467742b MainThread::INFO::2016-06-29 08:47:45,597::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Found an OVF for HE VM, trying to convert MainThread::INFO::2016-06-29 08:47:45,599::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Got vm.conf from OVF_STORE MainThread::INFO::2016-06-29 08:47:49,291::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 3400)
participants (5)
- Dan Kenigsberg
- Michal Skrivanek
- Sandro Bonazzola
- Simone Tiraboschi
- Stefano Danzi