ovirt 3.5 engine web certificate
by Baptiste Agasse
Hi all,
I've followed the procedure to replace the self-signed certificate with one issued by our internal PKI, to avoid security warnings when users access the webui (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtuali...). The connection to the webui now works fine without any security warning (the internal PKI CA is in the trusted CA store of our clients' OS). But on the other hand, I have some troubles:
* I have to specify the --ca-file option for ovirt-shell and engine-iso-uploader (I didn't test the engine-image-uploader command). It would be nice if the documentation provided a way to change this default (or to use the trusted CA store of the OS?). This is not a bug, just some feedback on the certificate change procedure, which doesn't cover these side effects.
* I can't add new oVirt nodes anymore. The ovirt-hosted-engine --deploy fails on new nodes with an SSL error. To work around this I have to modify the file "/usr/lib/python2.7/site-packages/ovirtsdk/web/connection.py" around line 233 to make an insecure connection to the engine, and then add the new node (see the connection sketch below). I haven't tested adding a new node from the oVirt engine cli/webui, but I think it would hit the same issue, because the error occurs during vdsm activation, which is common to the 'new hosted engine node' and 'new node' deployments. I've seen https://bugzilla.redhat.com/show_bug.cgi?id=1059952 but the workaround noted in comment #8 didn't work for me.
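For reference, a minimal sketch (ovirt-engine-sdk-python 3.x) of connecting to the engine while pointing the SDK at the internal CA, or falling back to an insecure connection, which is effectively what the connection.py edit does. The URL, credentials and CA path below are placeholders, not values from this deployment:

    from ovirtsdk.api import API

    # Point the SDK at the CA that signed the engine certificate (placeholder path).
    api = API(url='https://engine.example.com/ovirt-engine/api',
              username='admin@internal',
              password='password',
              ca_file='/etc/pki/internal-ca.pem')
    # Or, roughly equivalent to the connection.py workaround (not recommended long term):
    # api = API(url='https://engine.example.com/ovirt-engine/api',
    #           username='admin@internal', password='password', insecure=True)
    for dc in api.datacenters.list():
        print(dc.get_name())
    api.disconnect()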
Does anyone have more info on this issue, or the same problem?
This deployment is on ovirt 3.5.3, CentOS 7 (engine and nodes).
Have a nice day.
Regards.
--
Baptiste
Get involved in oVirt project! Autumn edition
by Sandro Bonazzola
Hi,
Autumn is coming, and with it the next 3.6.0 release and the start of the next major release cycle.
Have you got some free time and do you want to get involved in the oVirt project?
Do you like the idea of having fresh disk images of recent distributions in the oVirt Glance repository?
You can help us by testing existing online images (like https://getfedora.org/en/cloud/download/), ensuring they work with cloud-init, or by creating one yourself and reporting your success to devel(a)ovirt.org.
We'll be happy to upload the images once they are ready.
Do you like Debian and do you have some programming skills?
Help us get VDSM running on it! We have started releasing highly experimental packages, and it's a good time to give them a try.
You can follow the progress here: http://www.ovirt.org/VDSM_on_Debian
Here are some bugs you can try to help with:
Bug ID   Whiteboard   Status    Target Release  Summary
1115059  network      ASSIGNED  3.6.0           Incomplete error message when adding VNIC profile to running VM
1234257  integration  ON_QA     3.5.5           Test engine upgrade path from EL6 with oVirt 3.5.z to EL7
1251965  integration  NEW       3.6.0           Appliance based setup should default to using /var/tmp for unpacking the image
1221176  integration  NEW       3.6.0           hosted-engine accepts FQDNs with underscore while the engine correctly fails on that
1120585  integration  NEW       3.6.0           update image uploader documentation
1120586  integration  NEW       3.6.0           update iso uploader documentation
1120588  integration  NEW       3.6.0           update log collector documentation
1156060  integration  NEW       3.6.0           [text] engine admin password prompt consistency
1237132  integration  NEW       3.6.0           [TEXT] New package listing of engine-setup when upgrading packages is not user friendly
1247068  integration  NEW       3.6.0           [TEXT] Warn the administrator that CD/DVD passthrough is disabled for RHEL7
1083104  integration  ASSIGNED  3.6.0           engine-setup --offline does not update versionlock
1059952  integration  ASSIGNED  3.6.0           hosted-engine --deploy (additional host) will fail if the engine is not using the default self-signed CA
1065350  integration  POST      3.6.0           hosted-engine should prompt a question at the user when the host was already a host in the engine
1232825  integration  ON_QA     3.6.0           [Text] Need to update in the ovirt-engine-reports-tool the text "Exporting users from Jasperreports"
772931   infra        NEW       ---             [RFE] Reports should include the name of the oVirt engine
1074301  infra        NEW       4.0.0           [RFE] ovirt-shell has no man page
1174285  i18n         NEW       3.6.0           [de-DE] "Live Snapshot Support" reads "Live Snapsnot Support"
1159784  docs         NEW       ---             [RFE] Document when and where new features are available when upgrading cluster / datacenters
1099998  docs         NEW       3.6.0           Hosted Engine documentation has several errors
1099995  docs         NEW       3.6.0           Migrate to Hosted Engine How-To does not state all pre-reqs
Do you love "DevOps?", you count stable builds in jenkins ci while trying
to fall a sleep?
Then oVirt infra team is looking for you!, join the infra team and dive in
to do the newest and coolest devops tools today!
Here are some of our open tasks you can help with:
https://ovirt-jira.atlassian.net/secure/RapidBoard.jspa?rapidView=1&proje...
You don't have programming skills or enough time for DevOps, but you still want to contribute?
Here are some bugs you can take care of without writing a line of code:
https://bugzilla.redhat.com/buglist.cgi?quicksearch=product%3Aovirt%20whi...
Do you prefer to test things? We have some test cases[5] you can try using
nightly snapshots[6].
Do you want to contribute test cases? Most of the features[7] included in
oVirt are missing a test case; you're welcome to contribute one!
Is this the first time you're trying to contribute to the oVirt project?
You can start from here [1][2]!
You don't know Gerrit very well? You can find some more docs here [3].
Any other questions about development? Feel free to ask on devel(a)ovirt.org
or on the IRC channel [4].
You don't really have the time or skills for any development, documentation, or testing related task?
Spread the word[8]!
Let us know you're getting involved, introduce yourself and tell us what you're going to do; you'll be welcome!
[1] http://www.ovirt.org/Develop
[2] http://www.ovirt.org/Working_with_oVirt_Gerrit
[3] https://gerrit-review.googlesource.com/Documentation
[4] http://www.ovirt.org/Community
[5] http://www.ovirt.org/Category:TestCase
[6] http://www.ovirt.org/Install_nightly_snapshot
[7] http://www.ovirt.org/Category:Feature
[8]
http://www.zdnet.com/article/how-much-longer-can-red-hats-ovirt-remain-co...
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Gluster command [<UNKNOWN>] failed on server
by suporte@logicworks.pt
Hi,
I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1, no HE.
For storage, I have only one server with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...
Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f....
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi....
Hint: Some lines were ellipsized, use -l to show in full.
Everything was running until I needed to restart the node (host); after that I was not able to make the host active again. This is the error message:
Gluster command [<UNKNOWN>] failed on server
I also disabled the JSON protocol, but with no success.
vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
supervdsm.log:
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
    xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 90, in _execGlusterXml
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
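For what it's worth, the execCmd line in the supervdsm.log above shows the exact command vdsm runs; a minimal sketch (plain Python, same command line as in the log, nothing else assumed) to reproduce the check directly on the host, independent of vdsm:

    # Minimal sketch: run the same probe vdsm runs (command taken from the
    # supervdsm.log excerpt above) to check glusterd outside of vdsm.
    import subprocess

    cmd = ["/usr/sbin/gluster", "--mode=script", "peer", "status", "--xml"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    print("rc=%d" % proc.returncode)
    print(out)
    # rc=1 with "Connection failed. Please check if gluster daemon is
    # operational." means glusterd itself is not reachable on this host.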
Any idea?
Thanks
José
--
Jose Ferradeira
http://www.logicworks.pt
ovirt+gluster+NFS : storage hicups
by Nicolas Ecarnot
Hi,
I used the two links below to setup a test DC :
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-pa...
The only thing I did differently is that I did not use a hosted engine; I
dedicated a solid server for that instead.
So I have one engine (CentOS 6.6) and 3 hosts (CentOS 7.0).
As in the docs above, my 3 hosts are publishing 300 GB of replicated
gluster storage, on top of which ctdb manages a floating virtual IP that
is used by NFS as the master storage domain.
The last point is that the manager is also presenting an NFS storage I'm
using as an export domain.
It took me some time to plug all this together, as it is a bit more
complicated than my other DC with a real SAN and no gluster, but it is
eventually working (I can run VMs, migrate them...).
I have made many harsh tests (from a very dumb user's point of view:
unplug/replug the power cable of this server - does ctdb float the vIP?
does gluster self-heal? does the VM restart?...).
When looking closely at each layer one by one, all seems to be correct:
ctdb is fast at managing the IP, NFS is OK, gluster seems to
reconstruct, fencing eventually worked with the lanplus workaround, and
so on...
But from time to time a severe hiccup seems to appear, which I
have great difficulty diagnosing.
The messages in the web GUI are not very precise, and not consistent:
- some tell about a host having network issues, but I can ping it
from every place it needs to be reached (especially from the SPM and the
manager):
"On host serv-vm-al01, Error: Network error during communication with
the Host"
- some tell that a volume is degraded when it is not (gluster
commands show no issue, and even the oVirt tab about the volumes is
all green)
- "Host serv-vm-al03 cannot access the Storage Domain(s) <UNKNOWN>
attached to the Data Center"
Just waiting a couple of seconds leads to a self-heal with no action on my part.
- Repeated "Detected change in status of brick
serv-vm-al03:/gluster/data/brick of volume data from DOWN to UP."
but absolutely no action is made on this filesystem.
At this time, zero VMs are running in this test datacenter, and no action
is made on the hosts. Still, I see some looping errors coming and
going, and I find no way to diagnose them.
Amongst the *actions* that I had the idea to use to solve some issues:
- I've found that trying to force the self-healing, and playing with
gluster commands, had no effect
- I've found that playing with the gluster-advised actions ("find /gluster
-exec stat {} \; ...") seems to have no effect either
- I've found that forcing ctdb to move the vIP ("ctdb stop, ctdb
continue") DID SOLVE most of these issues.
I believe that it's not what ctdb is doing that helps, but maybe one of
its shell hooks that is cleaning up some troubles?
As this setup is complex, I'm not asking anyone for a silver bullet, but maybe
you know which layer is the most fragile, and which one I should
look at more closely?
--
Nicolas ECARNOT
ovirt on OFTC issues?
by Sahina Bose
Hi all
When I send a message to #ovirt on OFTC, I get a response - "#ovirt
:Cannot send to channel"
Anyone else facing this?
thanks
sahina
Trying hosted-engine on ovirt-3.6 beta
by Joop
Hi All,
I have been trying the above and keep getting an error at the end about
being unable to write to the HEConfImage; see the attached log.
The host is Fedora 22 (a clean system) and the engine is CentOS 7.1. I followed the
readme from the 3.6 beta release notes; in short:
- set up an NFS server on the Fedora 22 host
- exported /nfs/ovirt-he/data
- installed yum, installed the 3.6 beta repo
- installed hosted-engine
- ran setup
- installed CentOS 7.1, ran engine-setup
Tried with and without selinux/iptables/firewalld.
Regards,
Joop
ovirt 3.6 Failed to execute stage 'Environment setup'
by Richard Neuboeck
Hi,
I'm trying to test a self-hosted engine oVirt 3.6 setup on a CentOS
7.1 minimal installation, but it fails quite early after running
hosted-engine --deploy with:
[ ERROR ] Failed to execute stage 'Environment setup': <Fault 1:
"<type 'exceptions.TypeError'>:cannot marshal None unless allow_none
is enabled">
So far I've followed the repository installation instructions as
mentioned on http://www.ovirt.org/OVirt_3.6_Release_Management, and
added the current gluster repo to the default minimal CentOS 7.1 setup.
The output of hosted-engine --deploy is as follows:
# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as
hypervisor and create a VM where you have to install oVirt Engine
afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150831123258-w2syys.log
Version: otopi-1.4.0_master
(otopi-1.4.0-0.0.master.20150727232243.git04fa8c9.el7)
It has been detected that this program is executed through
an SSH connection without using screen.
Continuing with the installation may lead to broken
installation if the network connection fails.
It is highly recommended to abort the installation and run
it inside a screen session using command "screen".
Do you want to continue anyway? (Yes, No)[No]: yes
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ ERROR ] Failed to execute stage 'Environment setup': <Fault 1:
"<type 'exceptions.TypeError'>:cannot marshal None unless allow_none
is enabled">
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150831123315.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
The VDSM log shows that it fails to run dmidecode to gather hardware
information. I had the same issue with oVirt 3.5, but the access
restriction on /dev/mem is kernel-imposed, so I'm not sure what to
make of it, since this kernel option is enabled by default on all the
systems I've tested.
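A minimal sketch to run the same probe by hand (the command line is the one visible in the log further down; nothing else assumed) and see whether dmidecode can read the system UUID at all on this host:

    # Minimal sketch: run the same system-uuid probe vdsm runs (command taken
    # from the log below) to check dmidecode access outside of vdsm.
    import subprocess

    cmd = ["/usr/bin/sudo", "-n", "/usr/sbin/dmidecode", "-s", "system-uuid"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    print("rc=%d out=%r err=%r" % (proc.returncode, out, err))
    # rc=1 with "/dev/mem: Operation not permitted" matches the failure in the
    # log; rc=0 with a UUID would point at a vdsm/sudo-specific restriction.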
There seem to be some gluster packages missing, but I'm guessing
that's not the problem at hand.
I'm not sure what to search for in the logs, so I'm kind of stuck as
to what to try next. Any help is greatly appreciated.
All the best
Richard
The rest of the VDSM log during the hosted-engine setup is as follows:
BindingXMLRPC::INFO::2015-08-31
12:32:33,395::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47391
Thread-51::INFO::2015-08-31
12:32:33,396::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47391 started
Thread-51::INFO::2015-08-31
12:32:33,399::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47391 stopped
Reactor thread::INFO::2015-08-31
12:32:48,416::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47392
Reactor thread::DEBUG::2015-08-31
12:32:48,428::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:32:48,429::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47392
Reactor thread::DEBUG::2015-08-31
12:32:48,429::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47392)
BindingXMLRPC::INFO::2015-08-31
12:32:48,429::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47392
Thread-52::INFO::2015-08-31
12:32:48,430::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47392 started
Thread-52::INFO::2015-08-31
12:32:48,434::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47392 stopped
Reactor thread::INFO::2015-08-31
12:32:48,416::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Detected protocol xml from 127.0.0.1:47393
Reactor thread::DEBUG::2015-08-31
12:33:03,465::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47393)
BindingXMLRPC::INFO::2015-08-31
12:33:03,465::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47393
Thread-53::INFO::2015-08-31
12:33:03,466::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47393 started
Thread-53::INFO::2015-08-31
12:33:03,469::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47393 stopped
Reactor thread::INFO::2015-08-31
12:33:04,772::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47394
Reactor thread::DEBUG::2015-08-31
12:33:04,783::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:04,783::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47394
Reactor thread::DEBUG::2015-08-31
12:33:04,784::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47394)
BindingXMLRPC::INFO::2015-08-31
12:33:04,784::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47394
Thread-54::INFO::2015-08-31
12:33:04,786::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47394 started
Thread-54::DEBUG::2015-08-31
12:33:04,787::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-54::ERROR::2015-08-31
12:33:04,791::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-54::DEBUG::2015-08-31
12:33:04,793::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-54::INFO::2015-08-31
12:33:04,795::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47394 stopped
Reactor thread::INFO::2015-08-31
12:33:05,798::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47395
Reactor thread::DEBUG::2015-08-31
12:33:05,812::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:05,812::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47395
Reactor thread::DEBUG::2015-08-31
12:33:05,812::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47395)
BindingXMLRPC::INFO::2015-08-31
12:33:05,813::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47395
Thread-55::INFO::2015-08-31
12:33:05,814::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47395 started
Thread-55::DEBUG::2015-08-31
12:33:05,815::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-55::ERROR::2015-08-31
12:33:05,818::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-55::DEBUG::2015-08-31
12:33:05,818::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-55::INFO::2015-08-31
12:33:05,821::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47395 stopped
Reactor thread::INFO::2015-08-31
12:33:06,824::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47396
Reactor thread::DEBUG::2015-08-31
12:33:06,836::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:06,836::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47396
Reactor thread::DEBUG::2015-08-31
12:33:06,837::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47396)
BindingXMLRPC::INFO::2015-08-31
12:33:06,837::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47396
Thread-56::INFO::2015-08-31
12:33:06,838::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47396 started
Thread-56::DEBUG::2015-08-31
12:33:06,839::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-56::ERROR::2015-08-31
12:33:06,842::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-56::DEBUG::2015-08-31
12:33:06,842::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-56::INFO::2015-08-31
12:33:06,844::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47396 stopped
Reactor thread::INFO::2015-08-31
12:33:07,847::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47397
Reactor thread::DEBUG::2015-08-31
12:33:07,859::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:07,859::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47397
Reactor thread::DEBUG::2015-08-31
12:33:07,860::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47397)
BindingXMLRPC::INFO::2015-08-31
12:33:07,860::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47397
Thread-57::INFO::2015-08-31
12:33:07,861::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47397 started
Thread-57::DEBUG::2015-08-31
12:33:07,862::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-57::ERROR::2015-08-31
12:33:07,865::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-57::DEBUG::2015-08-31
12:33:07,865::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-57::INFO::2015-08-31
12:33:07,867::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47397 stopped
Reactor thread::INFO::2015-08-31
12:33:08,870::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47398
Reactor thread::DEBUG::2015-08-31
12:33:08,881::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:08,882::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47398
Reactor thread::DEBUG::2015-08-31
12:33:08,882::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47398)
BindingXMLRPC::INFO::2015-08-31
12:33:08,882::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47398
Thread-58::INFO::2015-08-31
12:33:08,884::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47398 started
Thread-58::DEBUG::2015-08-31
12:33:08,885::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-58::ERROR::2015-08-31
12:33:08,887::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-58::DEBUG::2015-08-31
12:33:08,888::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-58::INFO::2015-08-31
12:33:08,890::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47398 stopped
Reactor thread::INFO::2015-08-31
12:33:09,892::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47399
Reactor thread::DEBUG::2015-08-31
12:33:09,904::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:09,904::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47399
Reactor thread::DEBUG::2015-08-31
12:33:09,905::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47399)
BindingXMLRPC::INFO::2015-08-31
12:33:09,905::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47399
Thread-59::INFO::2015-08-31
12:33:09,906::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47399 started
Thread-59::DEBUG::2015-08-31
12:33:09,907::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-59::ERROR::2015-08-31
12:33:09,909::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-59::DEBUG::2015-08-31
12:33:09,910::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-59::INFO::2015-08-31
12:33:09,912::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47399 stopped
Reactor thread::INFO::2015-08-31
12:33:10,914::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47400
Reactor thread::DEBUG::2015-08-31
12:33:10,926::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:10,926::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47400
Reactor thread::DEBUG::2015-08-31
12:33:10,927::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47400)
BindingXMLRPC::INFO::2015-08-31
12:33:10,927::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47400
Thread-60::INFO::2015-08-31
12:33:10,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47400 started
Thread-60::DEBUG::2015-08-31
12:33:10,929::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-60::ERROR::2015-08-31
12:33:10,931::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-60::DEBUG::2015-08-31
12:33:10,932::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-60::INFO::2015-08-31
12:33:10,934::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47400 stopped
Reactor thread::INFO::2015-08-31
12:33:11,936::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47401
Reactor thread::DEBUG::2015-08-31
12:33:11,948::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:11,948::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47401
Reactor thread::DEBUG::2015-08-31
12:33:11,948::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47401)
BindingXMLRPC::INFO::2015-08-31
12:33:11,949::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47401
Thread-61::INFO::2015-08-31
12:33:11,949::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47401 started
Thread-61::DEBUG::2015-08-31
12:33:11,950::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-61::ERROR::2015-08-31
12:33:11,953::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-61::DEBUG::2015-08-31
12:33:11,954::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-61::INFO::2015-08-31
12:33:11,955::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47401 stopped
Reactor thread::INFO::2015-08-31
12:33:12,958::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47402
Reactor thread::DEBUG::2015-08-31
12:33:12,969::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:12,970::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47402
Reactor thread::DEBUG::2015-08-31
12:33:12,970::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47402)
BindingXMLRPC::INFO::2015-08-31
12:33:12,970::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47402
Thread-62::INFO::2015-08-31
12:33:12,971::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47402 started
Thread-62::DEBUG::2015-08-31
12:33:12,972::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-62::ERROR::2015-08-31
12:33:12,975::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-62::DEBUG::2015-08-31
12:33:12,976::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-62::INFO::2015-08-31
12:33:12,977::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47402 stopped
Reactor thread::INFO::2015-08-31
12:33:13,980::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47403
Reactor thread::DEBUG::2015-08-31
12:33:13,991::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:13,992::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47403
Reactor thread::DEBUG::2015-08-31
12:33:13,992::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47403)
BindingXMLRPC::INFO::2015-08-31
12:33:13,993::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47403
Thread-63::INFO::2015-08-31
12:33:13,994::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47403 started
Thread-63::DEBUG::2015-08-31
12:33:13,995::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with () {}
Thread-63::ERROR::2015-08-31
12:33:13,998::API::1328::vds::(getHardwareInfo) failed to retrieve
hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
hw = supervdsm.getProxy().getHardwareInfo()
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in getHardwareInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-63::DEBUG::2015-08-31
12:33:13,998::bindingxmlrpc::1263::vds::(wrapper) return
getHardwareInfo with {'status': {'message': 'Failed to read hardware
information', 'code': 57}}
Thread-63::INFO::2015-08-31
12:33:14,000::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47403 stopped
Reactor thread::INFO::2015-08-31
12:33:15,042::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47404
Reactor thread::DEBUG::2015-08-31
12:33:15,054::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:15,054::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47404
Reactor thread::DEBUG::2015-08-31
12:33:15,054::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47404)
BindingXMLRPC::INFO::2015-08-31
12:33:15,055::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47404
Thread-64::INFO::2015-08-31
12:33:15,056::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47404 started
Thread-64::DEBUG::2015-08-31
12:33:15,057::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getCapabilities with () {}
Thread-64::DEBUG::2015-08-31
12:33:15,111::utils::661::root::(execCmd) /usr/sbin/tc qdisc show
(cwd None)
Thread-64::DEBUG::2015-08-31
12:33:15,124::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-64::DEBUG::2015-08-31
12:33:15,127::utils::661::root::(execCmd) /usr/bin/sudo -n
/usr/sbin/dmidecode -s system-uuid (cwd None)
Thread-64::DEBUG::2015-08-31
12:33:15,153::utils::679::root::(execCmd) FAILED: <err> = '/dev/mem:
Operation not permitted\n'; <rc> = 1
Thread-64::WARNING::2015-08-31
12:33:15,154::utils::812::root::(getHostUUID) Could not find host UUID.
Thread-64::DEBUG::2015-08-31
12:33:15,156::caps::780::root::(_getKeyPackages) rpm package
('glusterfs-rdma',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,158::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,160::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,161::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,164::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,164::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,165::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,165::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,170::bindingxmlrpc::1263::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:aa2f33e6faca'}], 'FC': []}, 'packages2':
{'kernel': {'release': '229.el7.x86_64', 'buildtime': 1425638202.0,
'version': '3.10.0'}, 'glusterfs-fuse': {'release': '1.el7',
'buildtime': 1438093544L, 'version': '3.7.3'}, 'spice-server':
{'release': '9.el7', 'buildtime': 1426031557L, 'version': '0.12.4'},
'librbd1': {'release': '2.el7', 'buildtime': 1425594433L, 'version':
'0.80.7'}, 'vdsm': {'release': '0.el7.centos', 'buildtime':
1440055696L, 'version': '4.17.3'}, 'qemu-kvm': {'release':
'23.el7_1.6.1', 'buildtime': 1438078890L, 'version': '2.1.2'},
'qemu-img': {'release': '23.el7_1.6.1', 'buildtime': 1438078890L,
'version': '2.1.2'}, 'libvirt': {'release': '16.el7_1.3',
'buildtime': 1431461920L, 'version': '1.2.8'}, 'glusterfs':
{'release': '1.el7', 'buildtime': 1438093544L, 'version': '3.7.3'},
'mom': {'release': '1.el7.centos', 'buildtime': 1436814841L,
'version': '0.5.0'}, 'glusterfs-server': {'release': '1.el7',
'buildtime': 1438093544L, 'version': '3.7.3'},
'glusterfs-geo-replication': {'release': '1.el7', 'buildtime':
1438093544L, 'version': '3.7.3'}}, 'numaNodeDistance': {'1': [21,
10], '0': [10, 21]}, 'cpuModel': 'Intel(R) Xeon(R) CPU E5-2690 v2 @
3.00GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start':
{'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}},
'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'liveSnapshot':
'true', 'kdumpStatus': 0, 'networks': {}, 'bridges': {}, 'uuid':
None, 'onlineCpus':
'0,1,2,3,4,5,6,7,8,9,20,21,22,23,24,25,26,27,28,29,10,11,12,13,14,15,16,17,18,19,30,31,32,33,34,35,36,37,38,39',
'nics': {'eno1': {'permhwaddr': '00:1e:67:b9:33:f9', 'addr': '',
'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4':
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg':
{'SLAVE': 'yes', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'DEVICE':
'eno1', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr':
'00:1e:67:b9:33:f9', 'speed': 1000, 'gateway': ''}, 'eno2':
{'permhwaddr': '00:1e:67:b9:33:fa', 'addr': '', 'ipv6gateway': '::',
'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '',
'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes',
'BOOTPROTO': 'none', 'MASTER': 'bond0', 'DEVICE': 'eno2', 'TYPE':
'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:1e:67:b9:33:f9',
'speed': 1000, 'gateway': ''}, 'eno3': {'permhwaddr':
'00:1e:67:b9:33:fb', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs':
[], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False,
'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'BOOTPROTO': 'none',
'MASTER': 'bond0', 'DEVICE': 'eno3', 'TYPE': 'Ethernet', 'ONBOOT':
'yes'}, 'hwaddr': '00:1e:67:b9:33:f9', 'speed': 1000, 'gateway':
''}, 'eno4': {'permhwaddr': '00:1e:67:b9:33:fc', 'addr': '',
'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4':
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg':
{'SLAVE': 'yes', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'DEVICE':
'eno4', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr':
'00:1e:67:b9:33:f9', 'speed': 1000, 'gateway': ''}},
'software_revision': '0', 'hostdevPassthrough': 'false',
'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags':
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,fsgsbase,smep,erms,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270,model_SandyBridge',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:aa2f33e6faca',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5',
'3.6'], 'autoNumaBalancing': 1, 'additionalFeatures':
['GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION',
'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem': '321', 'bondings':
{'bond0': {'ipv4addrs': ['131.130.44.101/24'], 'addr':
'131.130.44.101', 'cfg': {'IPV6INIT': 'no', 'BONDING_MASTER': 'yes',
'IPADDR': '131.130.44.101', 'IPV4_FAILURE_FATAL': 'no', 'PREFIX':
'24', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'miimon=100
mode=802.3ad', 'DEVICE': 'bond0', 'TYPE': 'Bond', 'ONBOOT': 'yes',
'NAME': 'Bond connection bond0'}, 'ipv6addrs':
['fe80::21e:67ff:feb9:33f9/64'], 'active_slave': '', 'mtu': '1500',
'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False,
'slaves': ['eno1', 'eno2', 'eno3', 'eno4'], 'hwaddr':
'00:1e:67:b9:33:f9', 'ipv6gateway': '::', 'gateway': '131.130.44.1',
'opts': {'miimon': '100', 'mode': '4'}}}, 'software_version':
'4.17', 'memSize': '515720', 'cpuSpeed': '1272.304', 'numaNodes':
{'1': {'totalMemory': '262144', 'cpus': [10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]}, '0':
{'totalMemory': '262065', 'cpus': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29]}}, 'cpuSockets': '2', 'vlans':
{}, 'lastClientIface': 'lo', 'cpuCores': '20', 'kvmEnabled': 'true',
'guestOverhead': '65', 'version_name': 'Snow Man', 'cpuThreads':
'40', 'emulatedMachines': ['pc-i440fx-rhel7.1.0', 'rhel6.3.0',
'pc-q35-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc',
'pc-q35-rhel7.1.0', 'q35', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0',
'pc-i440fx-rhel7.0.0'], 'rngSources': ['random'], 'operatingSystem':
{'release': '1.1503.el7.centos.2.8', 'version': '7', 'name':
'RHEL'}, 'lastClient': '127.0.0.1'}}
Thread-64::INFO::2015-08-31
12:33:15,298::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47404 stopped
Reactor thread::INFO::2015-08-31
12:33:18,486::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47405
Reactor thread::DEBUG::2015-08-31
12:33:18,498::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:18,499::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47405
Reactor thread::DEBUG::2015-08-31
12:33:18,499::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47405)
BindingXMLRPC::INFO::2015-08-31
12:33:18,499::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47405
Thread-65::INFO::2015-08-31
12:33:18,501::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47405 started
Thread-65::INFO::2015-08-31
12:33:18,504::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47405 stopped
Reactor thread::INFO::2015-08-31
12:33:33,520::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:47406
Reactor thread::DEBUG::2015-08-31
12:33:33,532::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-08-31
12:33:33,533::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47406
Reactor thread::DEBUG::2015-08-31
12:33:33,533::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47406)
BindingXMLRPC::INFO::2015-08-31
12:33:33,533::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47406
Thread-66::INFO::2015-08-31
12:33:33,535::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47406 started
Thread-66::INFO::2015-08-31
12:33:33,538::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47406 stopped
--
/dev/null