Cumulus Switch
by Matt Wells
I've seen some of the cool stuff coming with OVN, and a co-worker has even
done some great things with it. However, I was wondering whether anyone has
experience with Cumulus as the external provider for networks.
It's just a "weekend project" I'm picking up, and I thought I'd ask on the
list. I've not found other posts on it yet, but will continue to look.
I've just built a fresh lab with the latest and greatest oVirt on CentOS 7.
I've just made a fresh lab with the latest and greatest oVirt on CentOS 7.
Thanks to all, and a happy holiday season (if you're into the holiday thing).
:-)
Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
by Giuseppe Ragusa
On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> ----- Original Message -----
> > From: "Giuseppe Ragusa" <giuseppe.ragusa(a)hotmail.com>
> > To: "Ramesh Nachimuthu" <rnachimu(a)redhat.com>
> > Cc: users(a)ovirt.org
> > Sent: Friday, December 16, 2016 2:42:18 AM
> > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 /
> > GlusterFS 3.7.17
> >
> > Giuseppe Ragusa has shared a OneDrive file. To view it, click the
> > following link:
> >
> > vols.tar.gz <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> >
> >
> >
> > From: Ramesh Nachimuthu <rnachimu(a)redhat.com>
> > Sent: Monday, December 12, 2016, 09:32
> > To: Giuseppe Ragusa
> > Cc: users(a)ovirt.org
> > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> >
> > On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > > Hi all,
> > >
> > > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on
> > > CentOS 7.2):
> > >
> > > From /var/log/messages:
> > >
> > > Dec 9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > Internal server error
> > > Traceback (most recent call last):
> > >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
> > >     res = method(**params)
> > >   File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > >     result = fn(*methodArgs)
> > >   File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > >     return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > >   File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > >     rv = func(*args, **kwargs)
> > >   File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > >     statusOption)
> > >   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > >     return callMethod()
> > >   File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > >     **kwargs)
> > >   File "<string>", line 2, in glusterVolumeStatus
> > >   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
> > >     raise convert_to_error(kind, result)
> > > KeyError: 'device'
> > > [the identical traceback repeats at 15:27:48, 15:27:50, 15:27:52,
> > > 15:27:54, 15:27:58 and 15:27:59]
> > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine
> > > VM OVF from the OVF_STORE
> > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > > path:
> > > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > > an OVF for HE VM, trying to convert
> > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > > vm.conf from OVF_STORE
> > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state
> > > EngineUp (score: 3400)
> > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote
> > > host read.mgmt.private (id: 2, score: 3400)
> > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > established
> > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > closed
> > > [the established/closed pair repeats four times in the same second]
> > > Dec 9 15:27:48 shockley ovirt-ha-broker: INFO:mem_free.MemFree:memFree:
> > > 7392
> > > Dec 9 15:27:55 shockley ovirt-ha-broker:
> > > INFO:cpu_load_no_engine.EngineHealth:System load total=0.1234,
> > > engine=0.0364, non-engine=0.0869
> > > Dec 9 15:27:57 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Initializing
> > > VDSM
> > > Dec 9 15:27:57 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Connecting
> > > the storage
> > > Dec 9 15:27:58 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
> > > storage server
> > > Dec 9 15:27:58 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
> > > storage server
> > > Dec 9 15:27:59 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Refreshing
> > > the storage domain
> > > Dec 9 15:27:59 shockley ovirt-ha-broker:
> > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > established
> > >
> > > From /var/log/vdsm/vdsm.log:
> > >
> > > jsonrpc.Executor/1::ERROR::2016-12-09
> > > 15:27:46,870::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > Internal server error
> > > Traceback (most recent call last):
> > >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
> > >     res = method(**params)
> > >   File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > >     result = fn(*methodArgs)
> > >   File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > >     return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > >   File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > >     rv = func(*args, **kwargs)
> > >   File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > >     statusOption)
> > >   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > >     return callMethod()
> > >   File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > >     **kwargs)
> > >   File "<string>", line 2, in glusterVolumeStatus
> > >   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
> > >     raise convert_to_error(kind, result)
> > > KeyError: 'device'
> > > [the identical traceback repeats for jsonrpc.Executor/5 at 15:27:48,627,
> > > Executor/7 at 15:27:50,164, Executor/0 at 15:27:52,804, Executor/5 at
> > > 15:27:54,679, Executor/2 at 15:27:58,349 and Executor/4 at 15:27:59,169]
> > >
> > > From /var/log/vdsm/supervdsm.log:
> > >
> > > Traceback (most recent call last):
> > >   File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > >     res = func(*args, **kwargs)
> > >   File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > >     return func(*args, **kwargs)
> > >   File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > >     return _parseVolumeStatusDetail(xmltree)
> > >   File "/usr/share/vdsm/gluster/cli.py", line 216, in _parseVolumeStatusDetail
> > >     'device': value['device'],
> > > KeyError: 'device'
> > > [the identical "Error in wrapper" traceback repeats for
> > > MainProcess|jsonrpc.Executor/5 at 15:27:48,625, Executor/7 at
> > > 15:27:50,163, Executor/0 at 15:27:52,803, Executor/5 at 15:27:54,677,
> > > Executor/2 at 15:27:58,348 and Executor/4 at 15:27:59,168]
> > >
> > > Please note that the whole oVirt cluster is (apparently) working as it
> > > should, but due to a known limitation with the split-GlusterFS-network
> > > setup (http://lists.ovirt.org/pipermail/users/2016-August/042119.html,
> > > solved in https://gerrit.ovirt.org/#/c/60083/ but maybe not backported to
> > > 3.6.x, or present only in nightlies later than 3.6.7, right?), GlusterFS
> > > volumes are being managed from the hosts' command line only, while the
> > > oVirt Engine web UI is used only to monitor them.
> > >
> > > The GlusterFS part is currently experiencing some recurring NFS crashes
> > > (using the internal GlusterFS NFS support, not NFS-Ganesha), as reported
> > > on the Gluster users mailing list and in Bugzilla
> > > (http://www.gluster.org/pipermail/gluster-users/2016-December/029357.html
> > > and https://bugzilla.redhat.com/show_bug.cgi?id=1381970, without any
> > > feedback so far...), but only on non-oVirt-related volumes.
> > >
> > > Finally, I can confirm that checking all oVirt-related and
> > > non-oVirt-related GlusterFS volumes from the hosts' command line with:
> > >
> > > vdsClient -s localhost glusterVolumeStatus volumeName=nomevolume
> >
> > Can you post the output of 'gluster volume status <vol-name> detail --xml'?
> >
> > Regards,
> > Ramesh
> >
> > Hi Ramesh,
> >
> > Please find attached all the output produced with the following command:
> >
> > for vol in $(gluster volume list); do gluster volume status ${vol} detail
> > --xml > ${vol}.xml; res=$?; echo "Exit ${res} for volume ${vol}"; done
> >
> > Please note that the exit code was always zero.
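For reference, a short script along these lines (a sketch written for this
summary, not one of the thread's attachments; the *.xml file names match the
loop above) can scan those dumps for bricks that lack a <device> element:

    # Sketch: scan 'gluster volume status <vol> detail --xml' dumps for
    # <node> (brick) entries that have no <device> child element.
    import glob
    import xml.etree.ElementTree as ET

    for path in glob.glob("*.xml"):
        tree = ET.parse(path)
        for node in tree.iter("node"):
            hostname = node.findtext("hostname", "?")
            brick_path = node.findtext("path", "?")
            if node.find("device") is None:
                print("%s: brick %s:%s has no <device>"
                      % (path, hostname, brick_path))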
> >
>
> +gluster-users
>
> This seems to be a bug in GlusterFS 3.7.17. The output of 'gluster volume status <vol-name> detail --xml' should have a <device> element for all the bricks in the volume, but it is missing for the arbiter brick. This issue is not reproducible in GlusterFS 3.8.
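For illustration, the failing step in vdsm's gluster/cli.py builds a
per-brick dict from that XML; a defensive variant (just a sketch, not the
actual upstream patch) would tolerate the missing key:

    # Sketch of the parse step that raises the KeyError (simplified;
    # 'value' stands for the per-brick dict parsed from the status XML).
    def _parse_brick_detail(value):
        return {
            # GlusterFS 3.7.17 omits <device> for the arbiter brick, so
            # value['device'] raises KeyError; .get() yields None instead.
            'device': value.get('device'),
            'sizeTotal': value.get('sizeTotal'),
            'sizeFree': value.get('sizeFree'),
            'fsName': value.get('fsName'),
        }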
Do I need to open a GlusterFS bug for this on 3.7?
Looking at the changelog, it does not seem to have been fixed in 3.7.18, nor to be among the already-known issues.
On the oVirt side: is GlusterFS 3.8 compatible with oVirt 3.6.x (maybe with x > 7, i.e. using nightly snapshots)?
Many thanks.
Regards,
Giuseppe
> Regards,
> Ramesh
>
>
> > Many thanks for your help.
> >
> > Best regards,
> > Giuseppe
> >
> >
> > >
> > > always succeeds without errors.
> > >
> > > Many thanks in advance for any advice (please note that I'm planning to
> > > upgrade from 3.6.7 to the latest nightly 3.6.10.x as soon as the
> > > corresponding RHEV gets announced, then later all the way up to 4.1.0 as
> > > soon as it stabilizes; on the GlusterFS side I'd like to upgrade to 3.8.x
> > > asap, but I cannot find any hint on oVirt 3.6.x compatibility...).
> > >
> > > Best regards,
> > > Giuseppe
> > >
> > > PS: please keep my address in to/copy since I still have problems receiving
> > > oVirt mailing list messages on Hotmail.
> > >
> > >
about vms can't migrate automatically in HA
by 张 余歌
hello, my environment:
host A: engine
host B: compute, host in the cluster
host C: compute, host in the cluster
A VM runs on host B, with the NFS storage mounted on host B.
I use host A to provide NFS storage to hosts B and C as shared storage. I just want to realize HA without any manual operation.

The problem is:

1. I satisfied the conditions for HA, like configuring the power manager and the VM's HA parameters, and kept the defaults otherwise. If I just put host B into maintenance, the VM migrates fine.

Then I power host B down via the power manager. It seems the VM can run on host C until host B is rebooted by the power manager, and it does work, but it is weird: though host B is down, I expected host C to mount the NFS storage automatically...

2. I force a power failure on host B; the result is that hosts B and C both turn Non Responsive and the storage goes down. Everything is bad... Is there something I have overlooked?

It makes me feel helpless.
Best regards.

Get Outlook for Android <https://aka.ms/ghei36>
Hosted Engine won't deploy
by Gervais de Montbrun
Hi all,

I had to reinstall one of my hosts today and I noticed an issue. The error message was:

Ovirt2:
Cannot edit Host. You are using an unmanaged hosted engine VM. Please upgrade the cluster level to 3.6 and wait for the hosted engine storage domain to be properly imported.

I am running oVirt 4.0.5 with a hosted engine, and both the Cluster and the Data Center say they are running in 4.0 compatibility mode, so I don't understand this error. I did get the host set up by running `hosted-engine --deploy` and walking through the command-line options. Alarmingly, I was warned that this is deprecated and will not be possible in oVirt 4.1.

Any suggestions as to what I should do to sort out my issue?

Cheers,
Gervais
Python SDK .remove() does not return True?
by Yaniv Kaul
I'd expect, as with other actions (add(), for example), that .remove() (for
clusters, DCs, etc.) would return True as a sign of success.
Is that a bug or a design choice?
TIA,
Y.
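For context, a minimal sketch of the obvious workaround (assuming the v4
Python SDK, ovirtsdk4, where a service's remove() returns nothing and an
HTTP-level failure raises ovirtsdk4.Error): confirm the removal by
re-querying. Connection details below are placeholders.

    # Sketch, assuming ovirtsdk4; URL/credentials are placeholders.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    clusters = connection.system_service().clusters_service()
    cluster = clusters.list(search='name=mycluster')[0]
    clusters.cluster_service(cluster.id).remove()  # raises sdk.Error on failure
    print('removed:', not clusters.list(search='name=mycluster'))
    connection.close()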
WebUI error with nightly 4.0.7
by Alessandro De Salvo
Hi,
for a few days now, since upgrading my dev machine to the nightly repo
of 4.0.7, I've been getting this kind of error from the WebUI a few
minutes after ovirt-engine comes up:
Error while executing action: A Request to the Server failed:
java.lang.reflect.InvocationTargetException
The errors go away if I restart the engine service, but after about 15
minutes they show up again. These errors are very annoying as I cannot
use the UI unless I restart the engine.
When I use my Mac I also get other errors like this:
ERROR: Possible problem with your *.gwt.xml module file. The compile
time user.agent (gecko1_8) does not match the runtime user.agent value
(safari). Expect more errors.
Does anyone know whether these errors will be fixed in a future release?
Thanks,
Alessandro
oVirt multiips hook
by Bill Bill
Hello,

Following up on the users list, as opposed to Bugzilla.

Thanks for helping out with this, much appreciated. I was able to get the custom property added in the engine, and I can select the property and then enter the IPs.

I'm not sure if I created the hook correctly, as it doesn't appear to have made any changes so far; only one IP communicates.

I created a file called "multiips" in the /usr/libexec/vdsm/hooks/before_vm_start/ directory, containing the info from the Bugzilla thread.

Is there another step I should take, or perhaps I'm missing something?
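For what it's worth, a skeleton of what a before_vm_start hook generally
looks like (a sketch using vdsm's hooking module; the actual multiips logic
from the Bugzilla thread is not reproduced here). A classic gotcha is that
the file must be executable (chmod +x), or vdsm will skip it:

    #!/usr/bin/python
    # Sketch of a vdsm before_vm_start hook skeleton. Custom properties
    # configured in the engine reach the hook as environment variables.
    import os
    import hooking

    if 'multiips' in os.environ:
        domxml = hooking.read_domxml()
        # ... modify domxml based on os.environ['multiips'] ...
        hooking.write_domxml(domxml)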
gluster command failed
by Nathanaël Blanchet
Hi,
I used to run a replica 3 gluster volume successfully, but since the
last 4.0.5 update the hosts can't connect to each other; the message is:
gluster [gluster peer status guadalupe1.v100.abes.fr] command failed on
server guadalupe2.v100.abes.fr.
So host guadalupe1 can never come up.
When doing a gluster peer probe, they connect as expected. I
reinstalled vdsm and gluster, but the problem persists.
I found this in guadalupe2's supervdsm.log:
MainProcess|jsonrpc.Executor/6::DEBUG::2016-12-16
11:53:21,429::supervdsmServer::99::SuperVdsm.ServerCallback::(wrapper)
return peerStatus with [{'status': 'CONNECTED', 'hostname':
'10.34.101.56/24', 'uuid': 'c259c09b-8d7c-4b12-8745-677199877583'},
{'status': 'CONNECTED', 'hostname': 'guadalupe3.v100.abes.fr', 'uuid':
'6af67cd3-7931-446d-aaa2-ffea51325adc'}, {'status': 'CONNECTED',
'hostname': 'guadalupe1.v100.abes.fr', 'uuid':
'8eb485cd-31c4-4c3a-a315-3dc6d3ddc0c9'}]
MainProcess|jsonrpc.Executor/7::DEBUG::2016-12-16
11:53:21,490::supervdsmServer::92::SuperVdsm.ServerCallback::(wrapper)
call peerProbe with () {}
MainProcess|jsonrpc.Executor/7::DEBUG::2016-12-16
11:53:21,491::commands::68::root::(execCmd) /usr/bin/taskset --cpu-list
0-63 /usr/sbin/gluster --mode=script peer probe guadalupe1.v100.abes.fr
--xml (cwd None)
MainProcess|jsonrpc.Executor/7::DEBUG::2016-12-16
11:53:21,570::commands::86::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc.Executor/7::DEBUG::2016-12-16
11:53:21,570::supervdsmServer::99::SuperVdsm.ServerCallback::(wrapper)
return peerProbe with True
So guadalupe2 can see guadalupe1, yet taskset still executes a peer
probe to guadalupe1, which returns the message "Host guadalupe1.v100.abes.fr
port 24007 already in peer list".
How can I tell guadalupe2 to stop trying to probe guadalupe1?
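For reference, a small sketch (not oVirt code, just the same kind of XML
query vdsm issues above) to dump what the peer list actually contains; note
the odd '10.34.101.56/24' hostname in the peerStatus output above:

    # Sketch: run the same XML query and print each peer's hostname and
    # state, to see what guadalupe2 actually holds in its peer list.
    import subprocess
    import xml.etree.ElementTree as ET

    out = subprocess.check_output(
        ['gluster', '--mode=script', 'peer', 'status', '--xml'])
    for peer in ET.fromstring(out).iter('peer'):
        print(peer.findtext('hostname'),
              peer.findtext('connected'),
              peer.findtext('stateStr'))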
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
root login using HE w/o cloud-init?
by Mark Steckel
Still trying to get a single-server hosted-engine setup running at Hetzner.de...
If I use the HE appliance and do not use cloud-init, the VM launches but cannot be accessed.
<quote>
[ INFO ] Detecting available oVirt engine appliances
The following appliance have been found on your system:
[1] - The oVirt Engine Appliance image (OVA) - 4.0-20161210.1.el7.centos
[2] - Directly select an OVA file
Please select an appliance (1, 2) [1]: 1
[ INFO ] Verifying its sha1sum
[ INFO ] Checking OVF archive content (could take a few minutes depending on archive size)
[ INFO ] Checking OVF XML content (could take a few minutes depending on archive size)
[WARNING] OVF does not contain a valid image description, using default.
Would you like to use cloud-init to customize the appliance on the first boot (Yes, No)[Yes]? No
[WARNING] The oVirt engine appliance is not configured with a default password, please consider configuring it via cloud-init
<snip...>
Make a selection from the options below:
(1) Continue setup - oVirt-Engine installation is ready and ovirt-engine service is up
(2) Abort setup
(3) Power off and restart the VM
(4) Destroy VM and abort setup
</quote>
At this point I can access the VM via VNC, but I am unable to log in.
Is root login possible in this situation?
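For comparison, when cloud-init is used, hosted-engine --deploy prompts for
the engine VM's root password itself. A hand-written equivalent would be
ordinary cloud-config user-data along these lines (a sketch with placeholder
values, not taken from the deploy dialog above):

    #cloud-config
    # Sketch: set a root password on first boot (placeholder value).
    chpasswd:
      list: |
        root:ChangeMe123
      expire: false
    ssh_pwauth: true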
template issues
by Stefan Wandl
Hi,

I currently have two issues with templates:

1) I cannot create VMs from templates with pre-allocated disks. I have a VM with two preallocated disks and one thin-provisioned disk.
Whenever I create a new template from this machine, all disks are changed to thin provisioned inside the template.
During VM creation, in the resource tab I can only choose between Thin and Clone; since the disks have all been changed to thin provisioned, it is impossible to create pre-allocated disks from the template.

Can the behavior of "Make Template" somehow be changed?

2) I use cloud-init with a couple of settings in the "Initial Run" section for the installation of the machines. Whenever I create a new VM from the template inside the Admin GUI, "Use Cloud-Init/Sysprep" is enabled and prefilled. That's fine.
When I use the same template inside the User GUI, "Use Cloud-Init/Sysprep" is disabled and all the cloud-init settings are empty.

Should it work that way, or is it a bug?
--
UBIMET GmbH - weather matters
Stefan Wandl
Information & Process Management
A-1220 Wien • Donau-City-Straße 11 • Tel +43 1 263 11 22 479
swandl(a)ubimet.com • www.ubimet.com
/a></span><span style=3D"font-variant: normal;" data-mce-style=3D"font-vari=
ant: normal;"><span style=3D"color: #666666;" color=3D"#666666" data-mce-st=
yle=3D"color: #666666;"><span style=3D"font-family: Sans serif,sans-serif;"=
face=3D"Sans serif, sans-serif" data-mce-style=3D"font-family: Sans serif,=
sans-serif;"><span style=3D"font-size: xx-small;" size=3D"1" data-mce-style=
=3D"font-size: xx-small;"><span style=3D"font-style: normal;" data-mce-styl=
e=3D"font-style: normal;"><span style=3D"font-weight: normal;" data-mce-sty=
le=3D"font-weight: normal;"> </span></span></span></span></span></span><spa=
n style=3D"font-weight: normal;" class=3D"Object" id=3D"OBJ_PREFIX_DWT2246_=
com_zimbra_phone" data-mce-style=3D"font-weight: normal;"></span></span></s=
pan></span></span></span></span><span style=3D"font-variant: normal;" data-=
mce-style=3D"font-variant: normal;"><span style=3D"color: #666666;" color=
=3D"#666666" data-mce-style=3D"color: #666666;"><span style=3D"font-family:=
Sans serif,sans-serif;" face=3D"Sans serif, sans-serif" data-mce-style=3D"=
font-family: Sans serif,sans-serif;"><span style=3D"font-size: xx-small;" s=
ize=3D"1" data-mce-style=3D"font-size: xx-small;"><span style=3D"font-style=
: normal;" data-mce-style=3D"font-style: normal;"></span></span></span></sp=
8 years