Hi!
I found a workaround!
The broker process tries to connect to vdsm's IPv4 host address using an
IPv6 connection (I noticed that while running strace on the process),
but IPv6 is not initialized at boot. (Why connect to an IPv4 address
over IPv6?)
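
For illustration only, here is a minimal Python sketch (not oVirt code; the
hostname and port below are placeholders) of what socket.create_connection()
does under the hood. If name resolution hands back an IPv6 or IPv6-mapped
address first while IPv6 is disabled on the host, the connect fails with
errno 101, which matches the "Network is unreachable" traceback further down
in this thread:

    import socket

    # Placeholder endpoint: vdsm's address as the broker would resolve it.
    HOST, PORT = "ovirt01.example.com", 54321

    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            HOST, PORT, socket.AF_UNSPEC, socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        try:
            # If an AF_INET6 entry comes first while IPv6 is disabled on the
            # host, this raises errno 101 (ENETUNREACH).
            s.connect(sockaddr)
            s.close()
            break
        except socket.error as err:
            last_err = err
            s.close()
    else:
        if last_err is not None:
            raise last_err  # -> "error: [Errno 101] Network is unreachable"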
Actually, we make a deliberate effort to disable IPv6 on oVirt host networks.
Keeping it enabled without an explicit request was deemed a security issue.
Can you share your strace output and the relevant lines in vdsm.log? I
don't understand what issue you are reporting.
I added the following lines to crontab:

@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/lo/disable_ipv6' | /usr/bin/at now+1 minutes
@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/ovirtmgmt/disable_ipv6' | /usr/bin/at now+1 minutes
@reboot echo '/usr/sbin/route add default gw 192.168.1.254' | /usr/bin/at now+1 minutes
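
To sanity-check the workaround after a reboot (a hypothetical snippet, just
for verification), the current state of those sysctls can be read back like
this:

    # Print whether IPv6 is still disabled on the interfaces touched above
    # (1 = disabled, 0 = enabled).
    for iface in ("lo", "ovirtmgmt"):
        path = "/proc/sys/net/ipv6/conf/%s/disable_ipv6" % iface
        with open(path) as f:
            print("%s: disable_ipv6=%s" % (iface, f.read().strip()))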
On 24/06/2016 12:36, Stefano Danzi wrote:
> How can I change the self-hosted engine configuration to mount the
> gluster storage directly instead of going through gluster NFS?
>
> Maybe that would solve this....
>
> On 24/06/2016 10:16, Stefano Danzi wrote:
> > After an additional yum clean all && yum update, some other rpms were
> > updated.
> >
> > Something changed.....
> > My setup has the engine storage on gluster, but mounted via NFS.
> > Now the gluster daemon doesn't start automatically at boot. After
> > starting gluster manually, the error is the same:
> >
> > ==> /var/log/ovirt-hosted-engine-ha/broker.log <==
> > Thread-19::ERROR::2016-06-24 10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Error while serving connection
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle
> >     data)
> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch
> >     .set_storage_domain(client, sd_type, **options)
> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain
> >     self._backends[client].connect()
> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 400, in connect
> >     volUUID=volume.volume_uuid
> >   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path
> >     volUUID
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
> >     return self.__send(self.__name, args)
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
> >     verbose=self.__verbose
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
> >     return self.single_request(host, handler, request_body, verbose)
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
> >     self.send_content(h, request_body)
> >   File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
> >     connection.endheaders(request_body)
> >   File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
> >     self._send_output(message_body)
> >   File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
> >     self.send(msg)
> >   File "/usr/lib64/python2.7/httplib.py", line 797, in send
> >     self.connect()
> >   File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
> >     sock = socket.create_connection((self.host, self.port), self.timeout)
> >   File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
> >     raise err
> > error: [Errno 101] Network is unreachable
> >
> >
> > VDSM.log
> >
> > jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0 aborting False
> > jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Following parameters ['type'] were not recognized
> > jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Provided value "2" not defined in DiskType enum for Volume.getInfo
> > jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter capacity is not uint type
> > jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Required property allocType is not provided when calling Volume.getInfo
> > jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter mtime is not uint type
> > jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter ctime is not int type
> > jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter truesize is not uint type
> > jsonrpc.Executor/5::WARNING::2016-06-24 10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) Parameter apparentsize is not uint type
> > jsonrpc.Executor/5::DEBUG::2016-06-24 10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '6838c974-7656-4b40-87cc-f562ff0b2a4c', 'ctime': '1423074433', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': u'c66a14d3-112a-4104-9025-76bb2e7ad9f1', 'truesize': '1048576', 'type': 'PREALLOCATED'}
> > JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:36,514::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
> > JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:36,515::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
> > JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,807::betterAsyncore::132::vds.dispatcher::(send) SSL error during sending data: (104, 'Connection reset by peer')
> > JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:43,959::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: bad write retry
> > JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:43,959::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
> > JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:47,859::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
> > JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:47,860::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
> > JsonRpc (StompReactor)::ERROR::2016-06-24 10:10:51,725::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: (104, 'Connection reset by peer')
> > JsonRpc (StompReactor)::WARNING::2016-06-24 10:10:51,726::betterAsyncore::154::vds.dispatcher::(log_info) unhandled close event
> > Reactor thread::INFO::2016-06-24 10:10:53,851::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from ::ffff:192.168.1.50:48554
> > Reactor thread::DEBUG::2016-06-24 10:10:53,860::protocoldetector::85::ProtocolDetector.Detector::(__init__) Using required_size=11
> > Reactor thread::INFO::2016-06-24 10:10:53,861::protocoldetector::121::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::ffff:192.168.1.50:48554
> > Reactor thread::INFO::2016-06-24 10:10:53,862::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
> > Reactor thread::DEBUG::2016-06-24 10:10:53,862::stompreactor::482::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::ffff:192.168.1.50', 48554)
> >
> >
> >
> > > On 24/06/2016 8:18, Sandro Bonazzola wrote:
> > >
> > >
> > > On Thu, Jun 23, 2016 at 11:46 PM, Stefano Danzi
> > > <s.danzi(a)hawai.it> wrote:
> > >
> > > Hi!
> > > After cleaning the metadata, yum did update vdsm:
> > >
> > > [root@ovirt01 ~]# rpm -qva | grep vdsm
> > > vdsm-yajsonrpc-4.18.4.1-0.el7.centos.noarch
> > > vdsm-infra-4.18.4.1-0.el7.centos.noarch
> > > vdsm-cli-4.18.4.1-0.el7.centos.noarch
> > > vdsm-python-4.18.4.1-0.el7.centos.noarch
> > > vdsm-hook-vmfex-dev-4.18.4.1-0.el7.centos.noarch
> > > vdsm-xmlrpc-4.18.4.1-0.el7.centos.noarch
> > > vdsm-4.18.4.1-0.el7.centos.x86_64
> > > vdsm-api-4.18.4.1-0.el7.centos.noarch
> > > vdsm-gluster-4.18.4.1-0.el7.centos.noarch
> > > vdsm-jsonrpc-4.18.4.1-0.el7.centos.noarch
> > >
> > > But this does not solve the issue.
> > >
> > > - The host has no default gateway after a reboot
> > > - The self-hosted engine doesn't start.
> > >
> > >
> > > Martin, Dan, can you please look into this?
> > > Stefano, can you please share a full sos report from the host?
> > >
> > >
> > > vdsm.log:
> > >
https://drive.google.com/file/d/0ByMG4sDqvlZcVEJ5YVI1UWxrdE0/view?usp=sha...
> > >
> > >
> > >
> > > On 2016-06-23 21:41, Sandro Bonazzola wrote:
> > >
> > > On Thu, Jun 23, 2016 at 6:36 PM, Stefano Danzi
> > > <s.danzi(a)hawai.it>
> > > wrote:
> > >
> > > Hi!
> > > I've just upgraded oVirt from 3.6 to 4.0 and I'm not able to start
> > > the self-hosted engine.
> > >
> > >
> > > Hi Stefano, can you please try "yum clean metadata" and "yum update"
> > > again?
> > > You should get vdsm 4.18.4.1; please let us know if this solves
> > > your issue.
> > >
> > >
> > >
> > >
> > > --
> > > Sandro Bonazzola
> > > Better technology. Faster innovation. Powered by community
> > > collaboration.
> > > See how it works at redhat.com
> >
>
> Web: www.hawai.it
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users