oVirt 3.6.7 upgrade to 4.0.5 and CentOS 7.3
by ovirt@timmi.org
Hi oVirt List,
I want to upgrade our oVirt 3.6.7 installation to oVirt 4.0.5 in the next
couple of days.
My hosts are currently running CentOS 7.2. Is it safe to also upgrade them to
CentOS 7.3?
Does 4.0.5 support this version of CentOS as well?
Is it correct that the upgrade to 4.0 follows the same procedure as always?
Do I just need to install the new repositories?
I guess I have to delete the old 3.6 repos, or not?
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
# yum update "ovirt-engine-setup*"
# engine-setup
Best regards and thank you for your answers
Christoph
oVirt / OVN / MTU
by Devin Acosta
We are running oVirt 4.0.5 and have OVN working to provide a virtual Layer 2
network. We are noticing that, because OVN uses Geneve encapsulation and the
traffic crosses several firewalls and networks, we are running into an MTU
issue. What is the best way to lower the MTU of the entire OVN network to,
say, 1400, and also allow packets to be fragmented?
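(Not an authoritative answer, but one hedged way to lower the MTU on an
engine-defined logical network is through the REST API; the engine FQDN,
credentials and network ID below are placeholders, and for networks imported
from the OVN provider the guests' interfaces also have to be lowered to match:)
# curl -k -u admin@internal:password -X PUT \
    -H 'Content-Type: application/xml' \
    -d '<network><mtu>1400</mtu></network>' \
    https://engine.example.com/ovirt-engine/api/networks/<network-id>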
--
Devin Acosta
Red Hat Certified Architect, LinuxStack
602-354-1220 || devin(a)linuxguru.co
ovirtmgmt network change
by Bill Bill
Hello,
How can the IP of the hosts be changed? It is greyed out and I need to change
those. It's the only thing I'm not able to change..
Regarding old mail access
by TranceWorldLogic .
Hi,
While using this tool I tried to search and got some links (oVirt users
mailing list links) where the problem is already discussed.
But when I try to open those links, they are not visible.
Can someone help me understand how to view previous mail discussions?
Thanks,
~Rohit
EventProcessingPoolSize
by joost@familiealbers.nl
Hi all, there is an engine config option named EventProcessingPoolSize; its
default value is 10.
I am wondering what it controls and how to determine whether the setting is
right for my setup.
I have around 36 DCs at the moment, each with two hosts running between 2 and
3 VMs.
Should this value be increased?
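(For what it's worth, a hedged sketch of inspecting and changing the option
with engine-config on the engine machine - the value 20 is only an
illustration, not a recommendation, and the engine service needs a restart for
the change to take effect:)
# engine-config -g EventProcessingPoolSize
# engine-config -s EventProcessingPoolSize=20
# systemctl restart ovirt-engine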
News from oVirt CI: Introducing 'build-on-demand'
by Eyal Edri
FYI,
Following the last announcement about the manual build-from-patch job [1], we
got some feedback and requests from developers about improving the flow of
building artifacts from a patch.
I'm happy to announce that after some coding, the infra team was able to add
a new feature to the 'standard CI' framework that allows any oVirt project to
build RPMs for any VERSION or OS DISTRO using a single comment on the patch.
Full details can be found in the new oVirt blog post 'ci please build' [2],
but to give the TL;DR version here: all you have to do is write '*ci please
build*' in a comment and CI will trigger a job for you with new RPMs (or
tarballs).
The projects which already have this feature enabled are:
- ovirt-engine
- vdsm
- vdsm-jsonrpc-java
- ovirt-engine-dashboard
Adding a new project is a single line of code in the project's YAML file, and
it's fully described in the blog post [2], so feel free to add your project
as well.
So let the builds roll...
Happy Xmas!
[1] http://lists.phx.ovirt.org/pipermail/devel/2016-December/028967.html
[2] https://www.ovirt.org/blog/2016/12/ci-please-build/
--
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
by Giuseppe Ragusa
On Tue, Dec 20, 2016, at 09:16, Ramesh Nachimuthu wrote:
> ----- Original Message -----
> > From: "Giuseppe Ragusa" <giuseppe.ragusa(a)hotmail.com>
> > To: "Ramesh Nachimuthu" <rnachimu(a)redhat.com>
> > Cc: users(a)ovirt.org, gluster-users(a)gluster.org, "Ravishankar Narayanankutty" <ranaraya(a)redhat.com>
> > Sent: Tuesday, December 20, 2016 4:15:18 AM
> > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 /
> > GlusterFS 3.7.17
> >
> > On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> > > ----- Original Message -----
> > > > From: "Giuseppe Ragusa" <giuseppe.ragusa(a)hotmail.com>
> > > > To: "Ramesh Nachimuthu" <rnachimu(a)redhat.com>
> > > > Cc: users(a)ovirt.org
> > > > Sent: Friday, December 16, 2016 2:42:18 AM
> > > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > > GlusterFS volumes in HC HE oVirt 3.6.7 /
> > > > GlusterFS 3.7.17
> > > >
> > > > Giuseppe Ragusa has shared a OneDrive file. To view it, click the
> > > > following link:
> > > >
> > > > vols.tar.gz <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > >
> > > >
> > > >
> > > > From: Ramesh Nachimuthu <rnachimu(a)redhat.com>
> > > > Sent: Monday, 12 December 2016, 09:32
> > > > To: Giuseppe Ragusa
> > > > Cc: users(a)ovirt.org
> > > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > > >
> > > > On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > > > > Hi all,
> > > > >
> > > > > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > > > > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all
> > > > > on
> > > > > CentOS 7.2):
> > > > >
> > > > > From /var/log/messages:
> > > > >
> > > > > Dec 9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > > Internal
> > > > > server error#012Traceback (most recent call last):#012 File
> > > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > > _serveRequest#012 res = method(**params)#012 File
> > > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > > result
> > > > > = fn(*methodArgs)#012 File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > > line
> > > > > 117, in status#012 return self._gluster.volumeStatus(volumeName,
> > > > > brick,
> > > > > statusOption)#012 File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > > wrapper#012 rv = func(*args, **kwargs)#012 File
> > > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > > statusOption)#012 File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > > __call__#012 return callMethod()#012 File
> > > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > > **kwargs)#012
> > > > > File "<string>", line 2, in glusterVolumeStatus#012 File
> > > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > > > llmethod#012 raise convert_to_error(kind, result)#012KeyError:
> > > > > 'device'
> > > > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting
> > > > > Engine
> > > > > VM OVF from the OVF_STORE
> > > > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > > > > path:
> > > > > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > > > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > > > > an OVF for HE VM, trying to convert
> > > > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > > > > vm.conf from OVF_STORE
> > > > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current
> > > > > state
> > > > > EngineUp (score: 3400)
> > > > > Dec 9 15:27:47 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best
> > > > > remote
> > > > > host read.mgmt.private (id: 2, score: 3400)
> > > > > Dec 9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > > Internal
> > > > > server error#012Traceback (most recent call last):#012 File
> > > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > > _serveRequest#012 res = method(**params)#012 File
> > > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > > result
> > > > > = fn(*methodArgs)#012 File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > > line
> > > > > 117, in status#012 return self._gluster.volumeStatus(volumeName,
> > > > > brick,
> > > > > statusOption)#012 File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > > wrapper#012 rv = func(*args, **kwargs)#012 File
> > > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > > statusOption)#012 File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > > __call__#012 return callMethod()#012 File
> > > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > > **kwargs)#012
> > > > > File "<string>", line 2, in glusterVolumeStatus#012 File
> > > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > > > llmethod#012 raise convert_to_error(kind, result)#012KeyError:
> > > > > 'device'
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > established
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > closed
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > established
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > closed
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > established
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > closed
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > established
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > closed
> > > > > Dec 9 15:27:48 shockley ovirt-ha-broker:
> > > > > INFO:mem_free.MemFree:memFree:
> > > > > 7392
> > > > > Dec 9 15:27:50 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > > Internal
> > > > > server error#012Traceback (most recent call last):#012 File
> > > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > > _serveRequest#012 res = method(**params)#012 File
> > > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > > result
> > > > > = fn(*methodArgs)#012 File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > > line
> > > > > 117, in status#012 return self._gluster.volumeStatus(volumeName,
> > > > > brick,
> > > > > statusOption)#012 File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > > wrapper#012 rv = func(*args, **kwargs)#012 File
> > > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > > statusOption)#012 File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > > __call__#012 return callMethod()#012 File
> > > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > > **kwargs)#012
> > > > > File "<string>", line 2, in glusterVolumeStatus#012 File
> > > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > > > llmethod#012 raise convert_to_error(kind, result)#012KeyError:
> > > > > 'device'
> > > > > Dec 9 15:27:52 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > > Internal
> > > > > server error#012Traceback (most recent call last):#012 File
> > > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > > _serveRequest#012 res = method(**params)#012 File
> > > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > > result
> > > > > = fn(*methodArgs)#012 File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > > line
> > > > > 117, in status#012 return self._gluster.volumeStatus(volumeName,
> > > > > brick,
> > > > > statusOption)#012 File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > > wrapper#012 rv = func(*args, **kwargs)#012 File
> > > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > > statusOption)#012 File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > > __call__#012 return callMethod()#012 File
> > > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > > **kwargs)#012
> > > > > File "<string>", line 2, in glusterVolumeStatus#012 File
> > > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > > > llmethod#012 raise convert_to_error(kind, result)#012KeyError:
> > > > > 'device'
> > > > > Dec 9 15:27:54 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > > Internal
> > > > > server error#012Traceback (most recent call last):#012 File
> > > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > > _serveRequest#012 res = method(**params)#012 File
> > > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > > result
> > > > > = fn(*methodArgs)#012 File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > > line
> > > > > 117, in status#012 return self._gluster.volumeStatus(volumeName,
> > > > > brick,
> > > > > statusOption)#012 File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > > wrapper#012 rv = func(*args, **kwargs)#012 File
> > > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > > statusOption)#012 File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > > __call__#012 return callMethod()#012 File
> > > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > > **kwargs)#012
> > > > > File "<string>", line 2, in glusterVolumeStatus#012 File
> > > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > > > llmethod#012 raise convert_to_error(kind, result)#012KeyError:
> > > > > 'device'
> > > > > Dec 9 15:27:55 shockley ovirt-ha-broker:
> > > > > INFO:cpu_load_no_engine.EngineHealth:System load total=0.1234,
> > > > > engine=0.0364, non-engine=0.0869
> > > > > Dec 9 15:27:57 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Initializing
> > > > > VDSM
> > > > > Dec 9 15:27:57 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Connecting
> > > > > the storage
> > > > > Dec 9 15:27:58 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > > Internal
> > > > > server error#012Traceback (most recent call last):#012 File
> > > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > > _serveRequest#012 res = method(**params)#012 File
> > > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > > result
> > > > > = fn(*methodArgs)#012 File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > > line
> > > > > 117, in status#012 return self._gluster.volumeStatus(volumeName,
> > > > > brick,
> > > > > statusOption)#012 File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > > wrapper#012 rv = func(*args, **kwargs)#012 File
> > > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > > statusOption)#012 File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > > __call__#012 return callMethod()#012 File
> > > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > > **kwargs)#012
> > > > > File "<string>", line 2, in glusterVolumeStatus#012 File
> > > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > > > llmethod#012 raise convert_to_error(kind, result)#012KeyError:
> > > > > 'device'
> > > > > Dec 9 15:27:58 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
> > > > > storage server
> > > > > Dec 9 15:27:58 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
> > > > > storage server
> > > > > Dec 9 15:27:59 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Refreshing
> > > > > the storage domain
> > > > > Dec 9 15:27:59 shockley ovirt-ha-broker:
> > > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > > established
> > > > > Dec 9 15:27:59 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > > Internal
> > > > > server error#012Traceback (most recent call last):#012 File
> > > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > > _serveRequest#012 res = method(**params)#012 File
> > > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > > result
> > > > > = fn(*methodArgs)#012 File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > > line
> > > > > 117, in status#012 return self._gluster.volumeStatus(volumeName,
> > > > > brick,
> > > > > statusOption)#012 File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > > wrapper#012 rv = func(*args, **kwargs)#012 File
> > > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > > statusOption)#012 File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > > __call__#012 return callMethod()#012 File
> > > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > > **kwargs)#012
> > > > > File "<string>", line 2, in glusterVolumeStatus#012 File
> > > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > > > llmethod#012 raise convert_to_error(kind, result)#012KeyError:
> > > > > 'device'
> > > > >
> > > > > From /var/log/vdsm/vdsm.log:
> > > > >
> > > > > jsonrpc.Executor/1::ERROR::2016-12-09
> > > > > 15:27:46,870::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > > Internal server error
> > > > > Traceback (most recent call last):
> > > > > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > > > 533,
> > > > > in _serveRequest
> > > > > res = method(**params)
> > > > > File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > > > result = fn(*methodArgs)
> > > > > File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > > > return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > > > rv = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > > > statusOption)
> > > > > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > > > return callMethod()
> > > > > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > > > **kwargs)
> > > > > File "<string>", line 2, in glusterVolumeStatus
> > > > > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > > > in
> > > > > _callmethod
> > > > > raise convert_to_error(kind, result)
> > > > > KeyError: 'device'
> > > > > jsonrpc.Executor/5::ERROR::2016-12-09
> > > > > 15:27:48,627::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > > Internal server error
> > > > > Traceback (most recent call last):
> > > > > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > > > 533,
> > > > > in _serveRequest
> > > > > res = method(**params)
> > > > > File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > > > result = fn(*methodArgs)
> > > > > File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > > > return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > > > rv = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > > > statusOption)
> > > > > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > > > return callMethod()
> > > > > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > > > **kwargs)
> > > > > File "<string>", line 2, in glusterVolumeStatus
> > > > > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > > > in
> > > > > _callmethod
> > > > > raise convert_to_error(kind, result)
> > > > > KeyError: 'device'
> > > > > jsonrpc.Executor/7::ERROR::2016-12-09
> > > > > 15:27:50,164::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > > Internal server error
> > > > > Traceback (most recent call last):
> > > > > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > > > 533,
> > > > > in _serveRequest
> > > > > res = method(**params)
> > > > > File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > > > result = fn(*methodArgs)
> > > > > File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > > > return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > > > rv = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > > > statusOption)
> > > > > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > > > return callMethod()
> > > > > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > > > **kwargs)
> > > > > File "<string>", line 2, in glusterVolumeStatus
> > > > > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > > > in
> > > > > _callmethod
> > > > > raise convert_to_error(kind, result)
> > > > > KeyError: 'device'
> > > > > jsonrpc.Executor/0::ERROR::2016-12-09
> > > > > 15:27:52,804::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > > Internal server error
> > > > > Traceback (most recent call last):
> > > > > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > > > 533,
> > > > > in _serveRequest
> > > > > res = method(**params)
> > > > > File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > > > result = fn(*methodArgs)
> > > > > File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > > > return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > > > rv = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > > > statusOption)
> > > > > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > > > return callMethod()
> > > > > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > > > **kwargs)
> > > > > File "<string>", line 2, in glusterVolumeStatus
> > > > > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > > > in
> > > > > _callmethod
> > > > > raise convert_to_error(kind, result)
> > > > > KeyError: 'device'
> > > > > jsonrpc.Executor/5::ERROR::2016-12-09
> > > > > 15:27:54,679::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > > Internal server error
> > > > > Traceback (most recent call last):
> > > > > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > > > 533,
> > > > > in _serveRequest
> > > > > res = method(**params)
> > > > > File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > > > result = fn(*methodArgs)
> > > > > File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > > > return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > > > rv = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > > > statusOption)
> > > > > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > > > return callMethod()
> > > > > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > > > **kwargs)
> > > > > File "<string>", line 2, in glusterVolumeStatus
> > > > > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > > > in
> > > > > _callmethod
> > > > > raise convert_to_error(kind, result)
> > > > > KeyError: 'device'
> > > > > jsonrpc.Executor/2::ERROR::2016-12-09
> > > > > 15:27:58,349::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > > Internal server error
> > > > > Traceback (most recent call last):
> > > > > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > > > 533,
> > > > > in _serveRequest
> > > > > res = method(**params)
> > > > > File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > > > result = fn(*methodArgs)
> > > > > File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > > > return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > > > rv = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > > > statusOption)
> > > > > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > > > return callMethod()
> > > > > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > > > **kwargs)
> > > > > File "<string>", line 2, in glusterVolumeStatus
> > > > > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > > > in
> > > > > _callmethod
> > > > > raise convert_to_error(kind, result)
> > > > > KeyError: 'device'
> > > > > jsonrpc.Executor/4::ERROR::2016-12-09
> > > > > 15:27:59,169::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > > Internal server error
> > > > > Traceback (most recent call last):
> > > > > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > > > 533,
> > > > > in _serveRequest
> > > > > res = method(**params)
> > > > > File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > > > result = fn(*methodArgs)
> > > > > File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > > > return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > > > rv = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > > > statusOption)
> > > > > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > > > return callMethod()
> > > > > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > > > **kwargs)
> > > > > File "<string>", line 2, in glusterVolumeStatus
> > > > > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > > > in
> > > > > _callmethod
> > > > > raise convert_to_error(kind, result)
> > > > > KeyError: 'device'
> > > > >
> > > > > From /var/log/vdsm/supervdsm.log:
> > > > >
> > > > > Traceback (most recent call last):
> > > > > File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > > > res = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > > > return func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > > > return _parseVolumeStatusDetail(xmltree)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > > > _parseVolumeStatusDetail
> > > > > 'device': value['device'],
> > > > > KeyError: 'device'
> > > > > MainProcess|jsonrpc.Executor/5::ERROR::2016-12-09
> > > > > 15:27:48,625::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > > Error in wrapper
> > > > > Traceback (most recent call last):
> > > > > File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > > > res = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > > > return func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > > > return _parseVolumeStatusDetail(xmltree)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > > > _parseVolumeStatusDetail
> > > > > 'device': value['device'],
> > > > > KeyError: 'device'
> > > > > MainProcess|jsonrpc.Executor/7::ERROR::2016-12-09
> > > > > 15:27:50,163::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > > Error in wrapper
> > > > > Traceback (most recent call last):
> > > > > File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > > > res = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > > > return func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > > > return _parseVolumeStatusDetail(xmltree)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > > > _parseVolumeStatusDetail
> > > > > 'device': value['device'],
> > > > > KeyError: 'device'
> > > > > MainProcess|jsonrpc.Executor/0::ERROR::2016-12-09
> > > > > 15:27:52,803::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > > Error in wrapper
> > > > > Traceback (most recent call last):
> > > > > File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > > > res = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > > > return func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > > > return _parseVolumeStatusDetail(xmltree)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > > > _parseVolumeStatusDetail
> > > > > 'device': value['device'],
> > > > > KeyError: 'device'
> > > > > MainProcess|jsonrpc.Executor/5::ERROR::2016-12-09
> > > > > 15:27:54,677::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > > Error in wrapper
> > > > > Traceback (most recent call last):
> > > > > File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > > > res = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > > > return func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > > > return _parseVolumeStatusDetail(xmltree)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > > > _parseVolumeStatusDetail
> > > > > 'device': value['device'],
> > > > > KeyError: 'device'
> > > > > MainProcess|jsonrpc.Executor/2::ERROR::2016-12-09
> > > > > 15:27:58,348::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > > Error in wrapper
> > > > > Traceback (most recent call last):
> > > > > File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > > > res = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > > > return func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > > > return _parseVolumeStatusDetail(xmltree)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > > > _parseVolumeStatusDetail
> > > > > 'device': value['device'],
> > > > > KeyError: 'device'
> > > > > MainProcess|jsonrpc.Executor/4::ERROR::2016-12-09
> > > > > 15:27:59,168::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > > Error in wrapper
> > > > > Traceback (most recent call last):
> > > > > File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > > > res = func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > > > return func(*args, **kwargs)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > > > return _parseVolumeStatusDetail(xmltree)
> > > > > File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > > > _parseVolumeStatusDetail
> > > > > 'device': value['device'],
> > > > > KeyError: 'device'
> > > > >
> > > > > Please note that the whole oVirt cluster is (apparently) working as it
> > > > > should, but due to a known limitation with the split-GlusterFS-network setup
> > > > > (http://lists.ovirt.org/pipermail/users/2016-August/042119.html, solved in
> > > > > https://gerrit.ovirt.org/#/c/60083/ but maybe not backported to 3.6.x or
> > > > > present only in nightlies later than 3.6.7, right?), the GlusterFS volumes
> > > > > are being managed from the hosts' command line only, while the oVirt Engine
> > > > > web UI is used only to monitor them.
> > > > >
> > > > > The GlusterFS part is currently experiencing some recurring NFS crashes
> > > > > (using the internal GlusterFS NFS support, not NFS-Ganesha) as reported on
> > > > > the Gluster users mailing list and in Bugzilla
> > > > > (http://www.gluster.org/pipermail/gluster-users/2016-December/029357.html
> > > > > and https://bugzilla.redhat.com/show_bug.cgi?id=1381970, without any
> > > > > feedback so far...), but only on non-oVirt-related volumes.
> > > > >
> > > > > Finally, I can confirm that checking all oVirt-related and
> > > > > non-oVirt-related GlusterFS volumes from the hosts' command line with:
> > > > >
> > > > > vdsClient -s localhost glusterVolumeStatus volumeName=nomevolume
> > > >
> > > > Can you post the output of 'gluster volume status <vol-name> detail
> > > > --xml'.
> > > >
> > > > Regards,
> > > > Ramesh
> > > >
> > > > Hi Ramesh,
> > > >
> > > > Please find attached all the output produced with the following command:
> > > >
> > > > for vol in $(gluster volume list); do gluster volume status ${vol} detail
> > > > --xml > ${vol}.xml; res=$?; echo "Exit ${res} for volume ${vol}"; done
> > > >
> > > > Please note that the exit code was always zero.
> > > >
> > >
> > > +gluster-users
> > >
> > > This seems to be a bug in GlusterFS 3.7.17. The output of 'gluster volume
> > > status <vol-name> detail --xml' should have a <device> element for all
> > > the bricks in the volume, but it is missing for the arbiter brick. This
> > > issue is not reproducible in GlusterFS 3.8.
> >
> > Do I need to open a GlusterFS bug for this on 3.7?
> > Looking at the changelog, it does not seem to have been fixed in 3.7.18 nor
> > to be among the already known issues.
> >
>
> Please open a bug against Glusterfs 3.7.17.
Done:
https://bugzilla.redhat.com/show_bug.cgi?id=1406569
> > On the oVirt side: is GlusterFS 3.8 compatible with oVirt 3.6.x (maybe with
> > x > 7, i.e. using nightly snapshots)?
> >
>
> You can upgrade to GlusterFS 3.8. It is compatible with oVirt 3.6.
>
> Note: You may have to add the GlusterFS 3.8 repo manually from https://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/.
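(A hedged sketch of one way to do that on CentOS 7 - the CentOS Storage SIG
release package named below is an assumption, so double-check it, or
alternatively drop a .repo file pointing at the URL above into
/etc/yum.repos.d/:)
# yum install centos-release-gluster38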
Many thanks for your advice and for your help.
Do you happen to know whether the split-GlusterFS-network limitation in oVirt 3.6.7 has been fixed (by backporting https://gerrit.ovirt.org/#/c/60083/ ) in the latest oVirt 3.6.x nightly snapshot releases (unsupported, I know...)?
Many thanks again.
Regards,
Giuseppe
> Regards,
> Ramesh
>
> > Many thanks.
> >
> > Regards,
> > Giuseppe
> >
> > > Regards,
> > > Ramesh
> > >
> > >
> > > > Many thanks for your help.
> > > >
> > > > Best regards,
> > > > Giuseppe
> > > >
> > > >
> > > > >
> > > > > always succeeds without errors.
> > > > >
> > > > > Many thanks in advance for any advice (please note that I'm planning to
> > > > > upgrade from 3.6.7 to the latest nightly 3.6.10.x as soon as the
> > > > > corresponding RHEV gets announced, then later on all the way up to 4.1.0
> > > > > as soon as it stabilizes; on the GlusterFS side I'd like to upgrade ASAP
> > > > > to 3.8.x but I cannot find any hint about oVirt 3.6.x compatibility...).
> > > > >
> > > > > Best regards,
> > > > > Giuseppe
> > > > >
> > > > > PS: please keep my address in to/copy since I still have problems
> > > > > receiving
> > > > > oVirt mailing list messages on Hotmail.
> > > > >
> > > > >
> > > > > _______________________________________________
> > > > > Users mailing list
> > > > > Users(a)ovirt.org
> > > > > http://lists.phx.ovirt.org/mailman/listinfo/users
> > > >
> > > >
> > > >
> >
Adding Disk stuck?
by Pat Riehecky
Last Friday I started a job to add 1 new disk to each of 4 VMs - a total
of 4 disks, each 100 GB.
It still seems to be running, but no host shows an obvious I/O load.
The state is:
Adding Disk (hour glass)
-> Validating (green check mark)
-> Executing (hour glass)
->-> Creating Volume (green check mark)
I checked in with:
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh
and it didn't show anything interesting.
The VMs themselves show that the disks are there, but the VMs are still
locked by the disk operations.
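(For reference, a hedged way to check which entities the engine still
considers locked, assuming the stock dbutils scripts are in place - the -q
flag only queries and unlocks nothing; if the options differ on your version,
the script's help output shows the exact usage:)
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t disk
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t vm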
Ideas?
Pat
--
Pat Riehecky
Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org
Iso Upload
by Koen Vanoppen
Dear All,
I thought it was possible to upload an ISO to oVirt 4 now, but for some
reason I didn't manage to do it. If I try to upload an ISO in the disk
section of the GUI, I always get the following message:
Make sure the ovirt-imageio-proxy service is installed and configured, and
ovirt-engine's certificate is registered as a valid CA in the browser.
ovirt-imageio-proxy is installed, and I have already rerun engine-setup... But
still the upload gets paused when I try it... Any ideas?
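(A hedged checklist for this kind of failure - the engine FQDN below is a
placeholder: confirm the proxy service is actually running on the engine host,
then download the engine CA certificate and import it into the browser's
trusted authorities so the HTML5 upload can reach the proxy over TLS:)
# systemctl status ovirt-imageio-proxy
# curl -k -o ca.pem 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'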
Failed to import Hosted Engine VM
by knarra
Hi,
I have the latest master installed and I see that the Hosted Engine VM
fails to import. Below are the logs I see in the engine log. Can someone
help me understand why this happens?
2016-12-20 06:46:02,291Z INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] START, GetImageInfoVDSComman
d( GetImageInfoVDSCommandParameters:{runAsync='true',
storagePoolId='00000001-0001-0001-0001-000000000311',
ignoreFailoverLimit='false', storageDomainId='4830f5b2-5a7d-4a89-
8fc9-8911134035e4', imageGroupId='0dec26c2-59c8-4d7f-adc0-6e4c878028ee',
imageId='e1133334-9f08-4e71-9b3a-d6a93273fbd3'}), log id: 78f8a633
2016-12-20 06:46:02,291Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] START, GetVolumeInfoVDSComm
and(HostName = hosted_engine1,
GetVolumeInfoVDSCommandParameters:{runAsync='true',
hostId='4c4a3633-2c2a-49c9-be06-78a21a4a2584',
storagePoolId='00000001-0001-0001-0001-0000
00000311', storageDomainId='4830f5b2-5a7d-4a89-8fc9-8911134035e4',
imageGroupId='0dec26c2-59c8-4d7f-adc0-6e4c878028ee',
imageId='e1133334-9f08-4e71-9b3a-d6a93273fbd3'}), log
id: 62a0b308
2016-12-20 06:46:02,434Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Failed building DiskImage:
No enum const
org.ovirt.engine.core.common.businessentities.LeaseState.{owners=[Ljava.lang.Object;@28beccfa,
version=2}
2016-12-20 06:46:02,434Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Command 'org.ovirt.engine.c
ore.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand' return value '
VolumeInfoReturn:{status='Status [code=0, message=Done]'}
status = OK
domain = 4830f5b2-5a7d-4a89-8fc9-8911134035e4
voltype = LEAF
description = Hosted Engine Image
parent = 00000000-0000-0000-0000-000000000000
format = RAW
generation = 0
image = 0dec26c2-59c8-4d7f-adc0-6e4c878028ee
ctime = 1482153085
disktype = 2
legality = LEGAL
mtime = 0
apparentsize = 53687091200
children:
[]
pool =
capacity = 53687091200
uuid = e1133334-9f08-4e71-9b3a-d6a93273fbd3
truesize = 2761210368
type = SPARSE
lease:
owners:
[1]
version = 2
'
2016-12-20 06:46:02,434Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] HostName = hosted_engine1
2016-12-20 06:46:02,434Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] FINISH,
GetVolumeInfoVDSCommand, log id: 62a0b308
2016-12-20 06:46:02,434Z ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Failed to get the volume
information, marking as FAILED
2016-12-20 06:46:02,434Z INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] FINISH,
GetImageInfoVDSCommand, log id: 78f8a633
2016-12-20 06:46:02,434Z WARN
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Validation of action
'ImportVm' failed for user SYSTEM. Reasons:
VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
2016-12-20 06:46:02,435Z INFO
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Lock freed to object
'EngineLock:{exclusiveLocks='[89681893-94fe-4366-be6d-15141ff2b365=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>,
HostedEngine=<VM_NAME, ACTION_TYPE_FAILED_NAME_ALREADY_USED>]',
sharedLocks='[89681893-94fe-4366-be6d-15141ff2b365=<REMOTE_VM,
ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]'}'
2016-12-20 06:46:02,435Z ERROR
[org.ovirt.engine.core.bll.HostedEngineImporter]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Failed importing the
Hosted Engine VM
2016-12-20 06:46:04,436Z INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler4) [2d8b8a56] FINISH,
GlusterServersListVDSCommand, return: [10.70.36.79/23:CONNECTED,
10.70.36.80:CONNECTED, 10.70.36.81:CONNECTED], log id: 617781b7
Thanks
kasturi.