Invitation: [3.6 deep dive] - AAA - local user management @ Mon 2015-09-07 17:00 - 17:45 (bazulay@redhat.com)
by Barak Azulay 03 Sep '15
You have been invited to the following event.
Title: [3.6 deep dive] - AAA - local user management
Abstract:
oVirt 3.6 ships by default with a new AAA-JDBC extension, which stores
authentication and authorization data in a relational database and exposes
that data through the standardized oVirt AAA API, similarly to the existing
AAA-LDAP extension.
In this session we will discuss the design, usage, features and
customization of the AAA-JDBC extension.
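As a preview, local users will be manageable from the engine host with the
ovirt-aaa-jdbc-tool utility shipped with the extension (an illustrative
sketch; exact option names may differ in the final 3.6 build):
# add a local user to the internal profile (illustrative)
ovirt-aaa-jdbc-tool user add jdoe --attribute=firstName=John --attribute=lastName=Doe
# give the new account an initial password
ovirt-aaa-jdbc-tool user password-reset jdoe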
Feature page:
http://www.ovirt.org/Features/AAA_JDBC
Google hangout link:
https://plus.google.com/events/c45mkdo294kkjlcfiknk1bjc2bo
YouTube link:
http://www.youtube.com/watch?v=CUsaqLQIkuQ
When: Mon 2015-09-07 17:00 - 17:45 Jerusalem
Where: http://www.youtube.com/watch?v=CUsaqLQIkuQ
Calendar: bazulay(a)redhat.com
Who:
* Barak Azulay - organizer
* Oved Ourfali
* Martin Perina
* iheim(a)redhat.com
* users(a)ovirt.org
* devel(a)ovirt.org
Event details:
https://www.google.com/calendar/event?action=VIEW&eid=N3VmcTZjcW9xZXVmbXR1N…
Invitation from Google Calendar: https://www.google.com/calendar/
You are receiving this courtesy email at the account users(a)ovirt.org
because you are an attendee of this event.
To stop receiving future updates for this event, decline this event.
Alternatively you can sign up for a Google account at
https://www.google.com/calendar/ and control your notification settings for
your entire calendar.
Forwarding this invitation could allow any recipient to modify your RSVP
response. Learn more at
https://support.google.com/calendar/answer/37135#forwarding
Hi,
I just installed Version 3.5.3.1-1.el7.centos on CentOS 7.1, no HE.
For storage, I have only one server with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...
Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f....
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi....
Hint: Some lines were ellipsized, use -l to show in full.
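(as the hint says, the ellipsized lines can be shown in full with:)
systemctl -l status glusterd.service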
Everything was running until I needed to restart the node (host); after that
I was not able to make the host active again. This is the error message:
Gluster command [<UNKNOWN>] failed on server
I also disabled the JSON protocol, but with no success.
vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
supervdsm.log:
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
    xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 90, in _execGlusterXml
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
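For what it's worth, the exact command vdsm fails on (see the execCmd line
above) can be rerun by hand on the host, which at least separates a gluster
problem from a vdsm one:
# the call vdsm makes through supervdsm, exactly as logged above
/usr/sbin/gluster --mode=script peer status --xml
# and a quick sanity check of the daemon itself
systemctl status glusterd.service
gluster peer status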
Any idea?
Thanks
José
--
Jose Ferradeira
http://www.logicworks.pt
Hi,
I used the two links below to set up a test DC:
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part…
The only thing I did differently is that I did not use a hosted engine;
I dedicated a standalone server to it instead.
So I have one engine (CentOS 6.6) and 3 hosts (CentOS 7.0).
As in the docs above, my 3 hosts are publishing 300 GB of replicated
gluster storage, above which ctdb is managing a floating virtual IP that
is used by NFS as the master storage domain.
The last point is that the manager is also presenting an NFS storage that
I'm using as an export domain.
It took me some time to wire up this setup, as it is a bit more
complicated than my other DC with a real SAN and no gluster, but it is
eventually working (I can run VMs, migrate them...).
I have run many harsh tests (from a very dumb user's point of view:
unplug/replug the power cable of a server - does ctdb float the vIP?
does gluster self-heal? does the VM restart?...).
When looking closely at each layer one by one, all seems to be correct:
ctdb is fast at moving the IP, NFS is OK, gluster seems to reconstruct,
fencing eventually worked with the lanplus workaround, and so on...
But from time to time a severe hiccup appears which I have great
difficulty diagnosing.
The messages in the web GUI are not very precise, and not consistent:
- Some tell about a host having network issues, but I can ping it
from every place it needs to be reached (especially from the SPM and the
manager):
"On host serv-vm-al01, Error: Network error during communication with
the Host"
- Some tell that a volume is degraded, when it's not (the gluster
commands show no issue, and even the oVirt tabs about the volumes are
all green).
- "Host serv-vm-al03 cannot access the Storage Domain(s) <UNKNOWN>
attached to the Data Center"
Just waiting a couple of seconds leads to a self-heal with no action.
- Repeated "Detected change in status of brick
serv-vm-al03:/gluster/data/brick of volume data from DOWN to UP."
but absolutely no action is made on this filesystem.
At this time, zero VMs are running in this test datacenter, and no action
is being made on the hosts. Still, I see some looping errors coming and
going, and I find no way to diagnose them.
Amongst the *actions* that I have tried in order to solve these issues:
- I've found that trying to force the self-healing, and playing with
gluster commands, had no effect.
- I've found that playing with gluster's advised actions ("find /gluster
-exec stat {} \; ...") seems to have no effect either.
- I've found that forcing ctdb to move the vIP ("ctdb stop; ctdb
continue") DID SOLVE most of these issues.
I believe that it's not what ctdb is doing that helps, but maybe one of
its shell hooks is cleaning up some trouble?
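For the record, this is roughly how I check each layer when the hiccup
shows up (volume name "data" as in the events above; a sketch from memory):
# where does ctdb think the floating IP is?
ctdb status
ctdb ip
# do the gluster peers and bricks agree that all is well?
gluster peer status
gluster volume status data
gluster volume heal data info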
As this setup is complex, I don't expect a silver bullet from anyone, but
maybe you know which layer is the most fragile, and which one I should
look at more closely?
--
Nicolas ECARNOT
The oVirt team is pleased to announce that the oVirt 3.5.4 Final Release is
now available as of September 3rd 2015.
oVirt is an open source alternative to VMware vSphere, and provides an
excellent KVM management interface for multi-node virtualization.
oVirt is available now for Fedora 20,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed,
and for installation / upgrade instructions.
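For a fresh installation this usually boils down to the following (an
illustrative sketch; the release notes [1] are the authoritative reference):
# enable the oVirt 3.5 repositories and install the engine
yum localinstall http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
yum install ovirt-engine
engine-setup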
A new oVirt Live ISO will soon be available [2].
Please note that mirrors [3] usually need about one day to synchronize.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.5.3_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-live/
<http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-live/el6-3.5.3/ovirt-liv…>
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Hi all,
When I send a message to #ovirt on OFTC, I get the response "#ovirt
:Cannot send to channel".
Anyone else facing this?
thanks
sahina
Hi All,
I have been trying the above and keep getting an error at the end about
being unable to write to HEConfImage; see the attached log.
The host is Fedora 22 (clean system), the engine is CentOS 7.1. I followed
the readme from the 3.6 beta release notes, but in short:
- set up an NFS server on the Fedora 22 host
- exported /nfs/ovirt-he/data (export options sketched below)
- installed the 3.6 beta repo via yum
- installed hosted engine
- ran setup
- installed CentOS 7.1 in the VM, ran engine-setup
Tried with and without selinux/iptables/firewalld.
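For completeness, the NFS export on the Fedora 22 host is set up roughly
like this (a sketch; the 36:36 owner is the vdsm:kvm mapping the setup
guide calls for):
# /etc/exports on the NFS host (illustrative options)
/nfs/ovirt-he/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
# the exported directory must be owned by vdsm:kvm (36:36)
chown 36:36 /nfs/ovirt-he/data
exportfs -ra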
Regards,
Joop
Hi all,
The following 3.6 videos on new features were omitted from the last Brian P. report:
oVirt 3.6 power management UI changes: https://www.youtube.com/watch?v=AkfAMpEykdU&html5=1
oVirt 3.6 external status for host & storage domain: https://www.youtube.com/watch?v=xUIbNeN-AxA&html5=1
Thanks
Eli Mesika
Hi,
I'm trying to test a self-hosted engine oVirt 3.6 setup on a CentOS 7.1
minimal installation, but it fails quite early after running
hosted-engine --deploy with
[ ERROR ] Failed to execute stage 'Environment setup': <Fault 1:
"<type 'exceptions.TypeError'>:cannot marshal None unless allow_none
is enabled">
So far I've followed the repository installation instructions as
mentioned on http://www.ovirt.org/OVirt_3.6_Release_Management, and
added the current gluster repo to the default minimal CentOS 7.1 setup.
The output of hosted-engine --deploy is as follows:
# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Continuing will configure this host for serving as
hypervisor and create a VM where you have to install oVirt Engine
afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20150831123258-w2syys.log
Version: otopi-1.4.0_master
(otopi-1.4.0-0.0.master.20150727232243.git04fa8c9.el7)
It has been detected that this program is executed through
an SSH connection without using screen.
Continuing with the installation may lead to broken
installation if the network connection fails.
It is highly recommended to abort the installation and run
it inside a screen session using command "screen".
Do you want to continue anyway? (Yes, No)[No]: yes
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ ERROR ] Failed to execute stage 'Environment setup': <Fault 1:
"<type 'exceptions.TypeError'>:cannot marshal None unless allow_none
is enabled">
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150831123315.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
The VDSM log shows that it fails to run dmidecode to gather hardware
information. I've had the same issue with oVirt 3.5, but the access
restriction on /dev/mem is kernel-imposed, so I'm not sure what to make
of it, since that kernel option is enabled by default on all the systems
I've tested.
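The failing call is easy to reproduce by hand; this is exactly the command
vdsm runs (see the execCmd line in the log below):
# what vdsm executes to read the host UUID
/usr/bin/sudo -n /usr/sbin/dmidecode -s system-uuid
# which on this machine fails with:
#   /dev/mem: Operation not permitted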
There seem to be some gluster packages missing, but I'm guessing that's
not the problem at hand.
I'm not sure what to search for in the logs, so I'm kind of stuck as to
what to try next. Any help is greatly appreciated.
All the best
Richard
The rest of the VDSM log during the hosted-engine setup is as follows:
BindingXMLRPC::INFO::2015-08-31 12:32:33,395::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47391
Thread-51::INFO::2015-08-31 12:32:33,396::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47391 started
Thread-51::INFO::2015-08-31 12:32:33,399::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47391 stopped
Reactor thread::INFO::2015-08-31 12:32:48,416::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47392
Reactor thread::DEBUG::2015-08-31 12:32:48,428::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:32:48,429::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47392
Reactor thread::DEBUG::2015-08-31 12:32:48,429::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47392)
BindingXMLRPC::INFO::2015-08-31 12:32:48,429::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47392
Thread-52::INFO::2015-08-31 12:32:48,430::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47392 started
Thread-52::INFO::2015-08-31 12:32:48,434::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47392 stopped
Reactor thread::INFO::2015-08-31 12:33:03,452::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47393
Reactor thread::DEBUG::2015-08-31 12:33:03,464::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:03,465::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47393
Reactor thread::DEBUG::2015-08-31 12:33:03,465::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47393)
BindingXMLRPC::INFO::2015-08-31 12:33:03,465::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47393
Thread-53::INFO::2015-08-31 12:33:03,466::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47393 started
Thread-53::INFO::2015-08-31 12:33:03,469::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47393 stopped
Reactor thread::INFO::2015-08-31 12:33:04,772::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47394
Reactor thread::DEBUG::2015-08-31 12:33:04,783::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:04,783::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47394
Reactor thread::DEBUG::2015-08-31 12:33:04,784::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47394)
BindingXMLRPC::INFO::2015-08-31 12:33:04,784::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47394
Thread-54::INFO::2015-08-31 12:33:04,786::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47394 started
Thread-54::DEBUG::2015-08-31 12:33:04,787::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-54::ERROR::2015-08-31 12:33:04,791::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-54::DEBUG::2015-08-31 12:33:04,793::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-54::INFO::2015-08-31 12:33:04,795::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47394 stopped
Reactor thread::INFO::2015-08-31 12:33:05,798::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47395
Reactor thread::DEBUG::2015-08-31 12:33:05,812::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:05,812::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47395
Reactor thread::DEBUG::2015-08-31 12:33:05,812::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47395)
BindingXMLRPC::INFO::2015-08-31 12:33:05,813::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47395
Thread-55::INFO::2015-08-31 12:33:05,814::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47395 started
Thread-55::DEBUG::2015-08-31 12:33:05,815::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-55::ERROR::2015-08-31 12:33:05,818::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-55::DEBUG::2015-08-31 12:33:05,818::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-55::INFO::2015-08-31 12:33:05,821::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47395 stopped
Reactor thread::INFO::2015-08-31 12:33:06,824::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47396
Reactor thread::DEBUG::2015-08-31 12:33:06,836::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:06,836::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47396
Reactor thread::DEBUG::2015-08-31 12:33:06,837::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47396)
BindingXMLRPC::INFO::2015-08-31 12:33:06,837::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47396
Thread-56::INFO::2015-08-31 12:33:06,838::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47396 started
Thread-56::DEBUG::2015-08-31 12:33:06,839::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-56::ERROR::2015-08-31 12:33:06,842::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-56::DEBUG::2015-08-31 12:33:06,842::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-56::INFO::2015-08-31 12:33:06,844::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47396 stopped
Reactor thread::INFO::2015-08-31 12:33:07,847::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47397
Reactor thread::DEBUG::2015-08-31 12:33:07,859::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:07,859::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47397
Reactor thread::DEBUG::2015-08-31 12:33:07,860::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47397)
BindingXMLRPC::INFO::2015-08-31 12:33:07,860::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47397
Thread-57::INFO::2015-08-31 12:33:07,861::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47397 started
Thread-57::DEBUG::2015-08-31 12:33:07,862::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-57::ERROR::2015-08-31 12:33:07,865::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-57::DEBUG::2015-08-31 12:33:07,865::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-57::INFO::2015-08-31 12:33:07,867::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47397 stopped
Reactor thread::INFO::2015-08-31 12:33:08,870::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47398
Reactor thread::DEBUG::2015-08-31 12:33:08,881::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:08,882::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47398
Reactor thread::DEBUG::2015-08-31 12:33:08,882::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47398)
BindingXMLRPC::INFO::2015-08-31 12:33:08,882::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47398
Thread-58::INFO::2015-08-31 12:33:08,884::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47398 started
Thread-58::DEBUG::2015-08-31 12:33:08,885::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-58::ERROR::2015-08-31 12:33:08,887::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-58::DEBUG::2015-08-31 12:33:08,888::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-58::INFO::2015-08-31 12:33:08,890::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47398 stopped
Reactor thread::INFO::2015-08-31 12:33:09,892::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47399
Reactor thread::DEBUG::2015-08-31 12:33:09,904::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:09,904::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47399
Reactor thread::DEBUG::2015-08-31 12:33:09,905::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47399)
BindingXMLRPC::INFO::2015-08-31 12:33:09,905::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47399
Thread-59::INFO::2015-08-31 12:33:09,906::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47399 started
Thread-59::DEBUG::2015-08-31 12:33:09,907::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-59::ERROR::2015-08-31 12:33:09,909::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-59::DEBUG::2015-08-31 12:33:09,910::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-59::INFO::2015-08-31 12:33:09,912::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47399 stopped
Reactor thread::INFO::2015-08-31 12:33:10,914::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47400
Reactor thread::DEBUG::2015-08-31 12:33:10,926::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:10,926::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47400
Reactor thread::DEBUG::2015-08-31 12:33:10,927::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47400)
BindingXMLRPC::INFO::2015-08-31 12:33:10,927::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47400
Thread-60::INFO::2015-08-31 12:33:10,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47400 started
Thread-60::DEBUG::2015-08-31 12:33:10,929::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-60::ERROR::2015-08-31 12:33:10,931::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-60::DEBUG::2015-08-31 12:33:10,932::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-60::INFO::2015-08-31 12:33:10,934::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47400 stopped
Reactor thread::INFO::2015-08-31 12:33:11,936::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47401
Reactor thread::DEBUG::2015-08-31 12:33:11,948::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:11,948::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47401
Reactor thread::DEBUG::2015-08-31 12:33:11,948::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47401)
BindingXMLRPC::INFO::2015-08-31 12:33:11,949::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47401
Thread-61::INFO::2015-08-31 12:33:11,949::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47401 started
Thread-61::DEBUG::2015-08-31 12:33:11,950::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-61::ERROR::2015-08-31 12:33:11,953::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-61::DEBUG::2015-08-31 12:33:11,954::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-61::INFO::2015-08-31 12:33:11,955::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47401 stopped
Reactor thread::INFO::2015-08-31 12:33:12,958::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47402
Reactor thread::DEBUG::2015-08-31 12:33:12,969::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:12,970::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47402
Reactor thread::DEBUG::2015-08-31 12:33:12,970::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47402)
BindingXMLRPC::INFO::2015-08-31 12:33:12,970::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47402
Thread-62::INFO::2015-08-31 12:33:12,971::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47402 started
Thread-62::DEBUG::2015-08-31 12:33:12,972::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-62::ERROR::2015-08-31 12:33:12,975::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-62::DEBUG::2015-08-31 12:33:12,976::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-62::INFO::2015-08-31 12:33:12,977::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47402 stopped
Reactor thread::INFO::2015-08-31 12:33:13,980::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:47403
Reactor thread::DEBUG::2015-08-31 12:33:13,991::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-08-31 12:33:13,992::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:47403
Reactor thread::DEBUG::2015-08-31 12:33:13,992::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 47403)
BindingXMLRPC::INFO::2015-08-31 12:33:13,993::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:47403
Thread-63::INFO::2015-08-31 12:33:13,994::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47403 started
Thread-63::DEBUG::2015-08-31 12:33:13,995::bindingxmlrpc::1256::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-63::ERROR::2015-08-31 12:33:13,998::API::1328::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1325, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in getHardwareInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
RuntimeError: [src/dmidecodemodule.c:317] Error decoding DMI data
Thread-63::DEBUG::2015-08-31 12:33:13,998::bindingxmlrpc::1263::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
Thread-63::INFO::2015-08-31 12:33:14,000::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:47403 stopped
Reactor thread::INFO::2015-08-31
12:33:15,042::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handl=
e_accept)
Accepting connection from 127.0.0.1:47404
Reactor thread::DEBUG::2015-08-31
12:33:15,054::protocoldetector::82::ProtocolDetector.Detector::(__init__)=
Using required_size=3D11
Reactor thread::INFO::2015-08-31
12:33:15,054::protocoldetector::118::ProtocolDetector.Detector::(handle_r=
ead)
Detected protocol xml from 127.0.0.1:47404
Reactor thread::DEBUG::2015-08-31
12:33:15,054::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47404)
BindingXMLRPC::INFO::2015-08-31
12:33:15,055::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47404
Thread-64::INFO::2015-08-31
12:33:15,056::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47404 started
Thread-64::DEBUG::2015-08-31
12:33:15,057::bindingxmlrpc::1256::vds::(wrapper) client
[127.0.0.1]::call getCapabilities with () {}
Thread-64::DEBUG::2015-08-31
12:33:15,111::utils::661::root::(execCmd) /usr/sbin/tc qdisc show
(cwd None)
Thread-64::DEBUG::2015-08-31
12:33:15,124::utils::679::root::(execCmd) SUCCESS: <err> =3D ''; <rc> =3D=
0
Thread-64::DEBUG::2015-08-31
12:33:15,127::utils::661::root::(execCmd) /usr/bin/sudo -n
/usr/sbin/dmidecode -s system-uuid (cwd None)
Thread-64::DEBUG::2015-08-31
12:33:15,153::utils::679::root::(execCmd) FAILED: <err> =3D '/dev/mem:
Operation not permitted\n'; <rc> =3D 1
Thread-64::WARNING::2015-08-31
12:33:15,154::utils::812::root::(getHostUUID) Could not find host UUID.
Thread-64::DEBUG::2015-08-31
12:33:15,156::caps::780::root::(_getKeyPackages) rpm package
('glusterfs-rdma',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,158::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,160::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,161::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,164::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,164::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,165::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,165::caps::780::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-64::DEBUG::2015-08-31
12:33:15,170::bindingxmlrpc::1263::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:aa2f33e6faca'}], 'FC': []}, 'packages2':
{'kernel': {'release': '229.el7.x86_64', 'buildtime': 1425638202.0,
'version': '3.10.0'}, 'glusterfs-fuse': {'release': '1.el7',
'buildtime': 1438093544L, 'version': '3.7.3'}, 'spice-server':
{'release': '9.el7', 'buildtime': 1426031557L, 'version': '0.12.4'},
'librbd1': {'release': '2.el7', 'buildtime': 1425594433L, 'version':
'0.80.7'}, 'vdsm': {'release': '0.el7.centos', 'buildtime':
1440055696L, 'version': '4.17.3'}, 'qemu-kvm': {'release':
'23.el7_1.6.1', 'buildtime': 1438078890L, 'version': '2.1.2'},
'qemu-img': {'release': '23.el7_1.6.1', 'buildtime': 1438078890L,
'version': '2.1.2'}, 'libvirt': {'release': '16.el7_1.3',
'buildtime': 1431461920L, 'version': '1.2.8'}, 'glusterfs':
{'release': '1.el7', 'buildtime': 1438093544L, 'version': '3.7.3'},
'mom': {'release': '1.el7.centos', 'buildtime': 1436814841L,
'version': '0.5.0'}, 'glusterfs-server': {'release': '1.el7',
'buildtime': 1438093544L, 'version': '3.7.3'},
'glusterfs-geo-replication': {'release': '1.el7', 'buildtime':
1438093544L, 'version': '3.7.3'}}, 'numaNodeDistance': {'1': [21,
10], '0': [10, 21]}, 'cpuModel': 'Intel(R) Xeon(R) CPU E5-2690 v2 @
3.00GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start':
{'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}},
'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'liveSnapshot':
'true', 'kdumpStatus': 0, 'networks': {}, 'bridges': {}, 'uuid':
None, 'onlineCpus':
'0,1,2,3,4,5,6,7,8,9,20,21,22,23,24,25,26,27,28,29,10,11,12,13,14,15,16,1=
7,18,19,30,31,32,33,34,35,36,37,38,39',
'nics': {'eno1': {'permhwaddr': '00:1e:67:b9:33:f9', 'addr': '',
'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4':
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg':
{'SLAVE': 'yes', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'DEVICE':
'eno1', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr':
'00:1e:67:b9:33:f9', 'speed': 1000, 'gateway': ''}, 'eno2':
{'permhwaddr': '00:1e:67:b9:33:fa', 'addr': '', 'ipv6gateway': '::',
'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '',
'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes',
'BOOTPROTO': 'none', 'MASTER': 'bond0', 'DEVICE': 'eno2', 'TYPE':
'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:1e:67:b9:33:f9',
'speed': 1000, 'gateway': ''}, 'eno3': {'permhwaddr':
'00:1e:67:b9:33:fb', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs':
[], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False,
'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'BOOTPROTO': 'none',
'MASTER': 'bond0', 'DEVICE': 'eno3', 'TYPE': 'Ethernet', 'ONBOOT':
'yes'}, 'hwaddr': '00:1e:67:b9:33:f9', 'speed': 1000, 'gateway':
''}, 'eno4': {'permhwaddr': '00:1e:67:b9:33:fc', 'addr': '',
'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4':
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg':
{'SLAVE': 'yes', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'DEVICE':
'eno4', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr':
'00:1e:67:b9:33:f9', 'speed': 1000, 'gateway': ''}},
'software_revision': '0', 'hostdevPassthrough': 'false',
'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags':
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,=
clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp=
,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_ts=
c,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2=
,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_t=
imer,aes,xsave,avx,f16c,rdrand,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dthe=
rm,tpr_shadow,vnmi,flexpriority,ept,vpid,fsgsbase,smep,erms,model_Nehalem=
,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,mo=
del_n270,model_SandyBridge',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:aa2f33e6faca',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5',
'3.6'], 'autoNumaBalancing': 1, 'additionalFeatures':
['GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION',
'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem': '321', 'bondings':
{'bond0': {'ipv4addrs': ['131.130.44.101/24'], 'addr':
'131.130.44.101', 'cfg': {'IPV6INIT': 'no', 'BONDING_MASTER': 'yes',
'IPADDR': '131.130.44.101', 'IPV4_FAILURE_FATAL': 'no', 'PREFIX':
'24', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'miimon=3D100
mode=3D802.3ad', 'DEVICE': 'bond0', 'TYPE': 'Bond', 'ONBOOT': 'yes',
'NAME': 'Bond connection bond0'}, 'ipv6addrs':
['fe80::21e:67ff:feb9:33f9/64'], 'active_slave': '', 'mtu': '1500',
'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False,
'slaves': ['eno1', 'eno2', 'eno3', 'eno4'], 'hwaddr':
'00:1e:67:b9:33:f9', 'ipv6gateway': '::', 'gateway': '131.130.44.1',
'opts': {'miimon': '100', 'mode': '4'}}}, 'software_version':
'4.17', 'memSize': '515720', 'cpuSpeed': '1272.304', 'numaNodes':
{'1': {'totalMemory': '262144', 'cpus': [10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]}, '0':
{'totalMemory': '262065', 'cpus': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29]}}, 'cpuSockets': '2', 'vlans':
{}, 'lastClientIface': 'lo', 'cpuCores': '20', 'kvmEnabled': 'true',
'guestOverhead': '65', 'version_name': 'Snow Man', 'cpuThreads':
'40', 'emulatedMachines': ['pc-i440fx-rhel7.1.0', 'rhel6.3.0',
'pc-q35-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc',
'pc-q35-rhel7.1.0', 'q35', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0',
'pc-i440fx-rhel7.0.0'], 'rngSources': ['random'], 'operatingSystem':
{'release': '1.1503.el7.centos.2.8', 'version': '7', 'name':
'RHEL'}, 'lastClient': '127.0.0.1'}}
Thread-64::INFO::2015-08-31
12:33:15,298::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47404 stopped
Reactor thread::INFO::2015-08-31
12:33:18,486::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handl=
e_accept)
Accepting connection from 127.0.0.1:47405
Reactor thread::DEBUG::2015-08-31
12:33:18,498::protocoldetector::82::ProtocolDetector.Detector::(__init__)=
Using required_size=3D11
Reactor thread::INFO::2015-08-31
12:33:18,499::protocoldetector::118::ProtocolDetector.Detector::(handle_r=
ead)
Detected protocol xml from 127.0.0.1:47405
Reactor thread::DEBUG::2015-08-31
12:33:18,499::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47405)
BindingXMLRPC::INFO::2015-08-31
12:33:18,499::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47405
Thread-65::INFO::2015-08-31
12:33:18,501::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47405 started
Thread-65::INFO::2015-08-31
12:33:18,504::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47405 stopped
Reactor thread::INFO::2015-08-31
12:33:33,520::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handl=
e_accept)
Accepting connection from 127.0.0.1:47406
Reactor thread::DEBUG::2015-08-31
12:33:33,532::protocoldetector::82::ProtocolDetector.Detector::(__init__)=
Using required_size=3D11
Reactor thread::INFO::2015-08-31
12:33:33,533::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:47406
Reactor thread::DEBUG::2015-08-31
12:33:33,533::bindingxmlrpc::1296::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 47406)
BindingXMLRPC::INFO::2015-08-31
12:33:33,533::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for 127.0.0.1:47406
Thread-66::INFO::2015-08-31
12:33:33,535::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47406 started
Thread-66::INFO::2015-08-31
12:33:33,538::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:47406 stopped
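(For reference, the capabilities dict above can be re-fetched directly from
the host; below is a minimal sketch against VDSM's XML-RPC binding, assuming
the default port 54321 and ssl = false in /etc/vdsm/vdsm.conf. On a stock
SSL-enabled host, 'vdsClient -s 0 getVdsCaps' prints the same structure.)

# Minimal sketch: fetch the getVdsCapabilities dict shown in the log above.
# Assumes VDSM's XML-RPC binding on the default port 54321 with ssl = false
# in /etc/vdsm/vdsm.conf (plain HTTP, lab setups only).
import pprint
import xmlrpclib  # Python 2, matching the VDSM of this era

server = xmlrpclib.ServerProxy('http://127.0.0.1:54321')
caps = server.getVdsCapabilities()

# VDSM replies with {'status': {'code': ..., 'message': ...}, 'info': {...}};
# code 0 means success and 'info' is the dict dumped in the log.
if caps['status']['code'] == 0:
    pprint.pprint(caps['info'])
else:
    print 'VDSM error %(code)s: %(message)s' % caps['status']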
--
/dev/null
Re: [ovirt-users] oVirt 3.5.3.1 - Snapshot Failure with Error creating a new volume, code = 205
by Christian Rebel 01 Sep '15
This issue is also blocking me from performing a Clone, an Export, and so
on. Does anyone have an idea what is wrong with this VM and how I can fix it?
From: Christian Rebel [mailto:christian.rebel@gmx.at]
Sent: Friday, 28 August 2015 15:45
To: users(a)ovirt.org
Subject: oVirt 3.5.3.1 - Snapshot Failure with Error creating a new volume,
code = 205
Hi all,
I have a problem performing a Snapshot on one of my important VMs; could
anyone please be so kind as to assist me?
#### start of problematic vm snapshot ####
2015-08-28 15:36:08,172 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(ajp--127.0.0.1-8702-6) [439efb74] Lock Acquired to object EngineLock
[exclusiveLocks= key: ee2ea036-2af3-4a18-9329-08a7b0e7ce7c value: VM
, sharedLocks= ]
2015-08-28 15:36:08,224 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-45) Command
44173a42-970f-42b3-8d09-ca113c58b5df persisting async task placeholder for
child command be3c1922-aca6-4caf-a432-34fe95043446
2015-08-28 15:36:08,367 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-45) Command
44173a42-970f-42b3-8d09-ca113c58b5df persisting async task placeholder for
child command 03a43c66-0fed-4a82-9139-7b89328f4ae4
2015-08-28 15:36:08,517 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-45) Running command:
CreateAllSnapshotsFromVmCommand internal: false. Entities affected : ID:
ee2ea036-2af3-4a18-9329-08a7b0e7ce7c Type: VMAction group
MANIPULATE_VM_SNAPSHOTS with role type USER
2015-08-28 15:36:08,550 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] Running command:
CreateSnapshotCommand internal: true. Entities affected : ID:
00000000-0000-0000-0000-000000000000 Type: Storage
2015-08-28 15:36:08,560 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] START,
CreateSnapshotVDSCommand( storagePoolId =
00000002-0002-0002-0002-000000000021, ignoreFailoverLimit = false,
storageDomainId = 937822d9-8a59-490f-95b7-48371ae32253, imageGroupId =
e7e99288-ad83-406e-9cb6-7a5aa443de9b, imageSizeInBytes = 21474836480,
volumeFormat = COW, newImageId = 2013aa82-6316-4b54-851b-88bf7f523b9c,
newImageDescription = , imageId = c5762dec-d9d1-4842-84d1-05896d4d27fb,
sourceImageGroupId = e7e99288-ad83-406e-9cb6-7a5aa443de9b), log id: d1ffce0
2015-08-28 15:36:08,567 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] -- executeIrsBrokerCommand:
calling 'createVolume' with two new parameters: description and UUID
2015-08-28 15:36:08,655 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] FINISH,
CreateSnapshotVDSCommand, return: 2013aa82-6316-4b54-851b-88bf7f523b9c, log
id: d1ffce0
2015-08-28 15:36:08,668 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] CommandAsyncTask::Adding
CommandMultiAsyncTasks object for command
44173a42-970f-42b3-8d09-ca113c58b5df
2015-08-28 15:36:08,670 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(org.ovirt.thread.pool-8-thread-45) [86e8aad]
CommandMultiAsyncTasks::AttachTask: Attaching task
0221e559-0eec-468b-bc4f-a7aaa487661a to command
44173a42-970f-42b3-8d09-ca113c58b5df.
2015-08-28 15:36:08,734 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] Adding task
0221e559-0eec-468b-bc4f-a7aaa487661a (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't
started yet..
2015-08-28 15:36:08,793 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand]
(org.ovirt.thread.pool-8-thread-45) [53768935] Running command:
CreateSnapshotCommand internal: true. Entities affected : ID:
00000000-0000-0000-0000-000000000000 Type: Storage
2015-08-28 15:36:08,797 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [53768935] START,
CreateSnapshotVDSCommand( storagePoolId =
00000002-0002-0002-0002-000000000021, ignoreFailoverLimit = false,
storageDomainId = 937822d9-8a59-490f-95b7-48371ae32253, imageGroupId =
6281b597-020d-4ea7-a954-bb798a0ca4f1, imageSizeInBytes = 161061273600,
volumeFormat = COW, newImageId = fd9c6e36-90ca-488a-8cbd-534a0caf6886,
newImageDescription = , imageId = 2a2015a1-f62c-4e32-8b04-77ece2ba4cc1,
sourceImageGroupId = 6281b597-020d-4ea7-a954-bb798a0ca4f1), log id: 4bc33f2e
2015-08-28 15:36:08,799 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [53768935] -- executeIrsBrokerCommand:
calling 'createVolume' with two new parameters: description and UUID
2015-08-28 15:36:09,000 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [53768935] FINISH,
CreateSnapshotVDSCommand, return: fd9c6e36-90ca-488a-8cbd-534a0caf6886, log
id: 4bc33f2e
2015-08-28 15:36:09,011 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(org.ovirt.thread.pool-8-thread-45) [53768935]
CommandMultiAsyncTasks::AttachTask: Attaching task
c502e058-4b72-4d7f-9c97-a264866289e2 to command
44173a42-970f-42b3-8d09-ca113c58b5df.
2015-08-28 15:36:09,076 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(org.ovirt.thread.pool-8-thread-45) [53768935] Adding task
c502e058-4b72-4d7f-9c97-a264866289e2 (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't
started yet..
2015-08-28 15:36:09,235 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-45) Correlation ID: 439efb74, Job ID:
760f7564-fb7f-4bec-8ef7-5a1d3b7651fc, Call Stack: null, Custom Event ID: -1,
Message: Snapshot 'before upg' creation for VM 'Katello_2.2' was initiated
by admin@internal.
2015-08-28 15:36:09,237 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-8-thread-45) BaseAsyncTask::startPollingTask:
Starting to poll task 0221e559-0eec-468b-bc4f-a7aaa487661a.
2015-08-28 15:36:09,237 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-8-thread-45) BaseAsyncTask::startPollingTask:
Starting to poll task c502e058-4b72-4d7f-9c97-a264866289e2.
2015-08-28 15:36:11,279 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-80) Polling and updating Async Tasks: 2
tasks, 2 tasks to poll now
2015-08-28 15:36:11,309 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-80) Failed in HSMGetAllTasksStatusesVDS
method
2015-08-28 15:36:11,311 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-80) SPMAsyncTask::PollTask: Polling task
0221e559-0eec-468b-bc4f-a7aaa487661a (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status
finished, result 'cleanFailure'.
2015-08-28 15:36:11,343 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-80) BaseAsyncTask::logEndTaskFailure: Task
0221e559-0eec-468b-bc4f-a7aaa487661a (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with
failure:
-- Result: cleanFailure
-- Message: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = Error creating a new volume, code = 205,
-- Exception: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = Error creating a new volume, code = 205
2015-08-28 15:36:11,349 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(DefaultQuartzScheduler_Worker-80) Task with DB Task ID
639c5e5b-2713-4d9f-b95c-6dd7ed2fc370 and VDSM Task ID
c502e058-4b72-4d7f-9c97-a264866289e2 is in state Polling. End action for
command 44173a42-970f-42b3-8d09-ca113c58b5df will proceed when all the
entitys tasks are completed.
2015-08-28 15:36:11,352 WARN [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-80) SPMAsyncTask::PollTask: Polling task
c502e058-4b72-4d7f-9c97-a264866289e2 (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status
aborting.
2015-08-28 15:36:11,355 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-80) Finished polling Tasks, will poll again
in 10 seconds.
2015-08-28 15:36:21,367 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-4) Failed in HSMGetAllTasksStatusesVDS method
2015-08-28 15:36:21,369 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-4) Failed in HSMGetAllTasksStatusesVDS method
2015-08-28 15:36:21,402 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-4) BaseAsyncTask::logEndTaskFailure: Task
0221e559-0eec-468b-bc4f-a7aaa487661a (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with
failure:
-- Result: cleanFailure
-- Message: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = Error creating a new volume, code = 205,
-- Exception: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = Error creating a new volume, code = 205
2015-08-28 15:36:21,409 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(DefaultQuartzScheduler_Worker-4) Task with DB Task ID
639c5e5b-2713-4d9f-b95c-6dd7ed2fc370 and VDSM Task ID
c502e058-4b72-4d7f-9c97-a264866289e2 is in state Polling. End action for
command 44173a42-970f-42b3-8d09-ca113c58b5df will proceed when all the
entitys tasks are completed.
2015-08-28 15:36:21,412 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-4) SPMAsyncTask::PollTask: Polling task
c502e058-4b72-4d7f-9c97-a264866289e2 (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status
finished, result 'cleanFailure'.
2015-08-28 15:36:21,427 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-4) BaseAsyncTask::logEndTaskFailure: Task
c502e058-4b72-4d7f-9c97-a264866289e2 (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with
failure:
-- Result: cleanFailure
-- Message: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = Error creating a new volume, code = 205,
-- Exception: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = Error creating a new volume, code = 205
2015-08-28 15:36:21,433 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-4) CommandAsyncTask::endActionIfNecessary:
All tasks of command 44173a42-970f-42b3-8d09-ca113c58b5df has ended ->
executing endAction
2015-08-28 15:36:21,436 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-4) CommandAsyncTask::endAction: Ending action
for 2 tasks (command ID: 44173a42-970f-42b3-8d09-ca113c58b5df): calling
endAction .
2015-08-28 15:36:21,438 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-21) CommandAsyncTask::endCommandAction
[within thread] context: Attempting to endAction CreateAllSnapshotsFromVm,
executionIndex: 0
2015-08-28 15:36:21,510 ERROR
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-21) Ending command with failure:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand
2015-08-28 15:36:21,527 ERROR
[org.ovirt.engine.core.bll.CreateSnapshotCommand]
(org.ovirt.thread.pool-8-thread-21) [53768935] Ending command with failure:
org.ovirt.engine.core.bll.CreateSnapshotCommand
2015-08-28 15:36:21,660 ERROR
[org.ovirt.engine.core.bll.CreateSnapshotCommand]
(org.ovirt.thread.pool-8-thread-21) [86e8aad] Ending command with failure:
org.ovirt.engine.core.bll.CreateSnapshotCommand
2015-08-28 15:36:21,703 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-21) Lock freed to object EngineLock
[exclusiveLocks= key: ee2ea036-2af3-4a18-9329-08a7b0e7ce7c value: VM
, sharedLocks= ]
2015-08-28 15:36:21,719 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-21) Correlation ID: 439efb74, Job ID:
760f7564-fb7f-4bec-8ef7-5a1d3b7651fc, Call Stack: null, Custom Event ID: -1,
Message: Failed to complete snapshot 'before upg' creation for VM
'Katello_2.2'.
2015-08-28 15:36:21,723 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-21) CommandAsyncTask::HandleEndActionResult
[within thread]: endAction for action type CreateAllSnapshotsFromVm
completed, handling the result.
2015-08-28 15:36:21,725 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-21) CommandAsyncTask::HandleEndActionResult
[within thread]: endAction for action type CreateAllSnapshotsFromVm hasn't
succeeded, clearing tasks.
2015-08-28 15:36:21,735 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-8-thread-21) SPMAsyncTask::ClearAsyncTask: Attempting
to clear task c502e058-4b72-4d7f-9c97-a264866289e2
2015-08-28 15:36:21,738 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-8-thread-21) START, SPMClearTaskVDSCommand(
storagePoolId = 00000002-0002-0002-0002-000000000021, ignoreFailoverLimit =
false, taskId = c502e058-4b72-4d7f-9c97-a264866289e2), log id: 60494436
2015-08-28 15:36:21,768 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-8-thread-21) START, HSMClearTaskVDSCommand(HostName =
itsatltovirtaio.domain.local, HostId = b783a2ee-4a63-46ca-9afc-b3b74f0e10ce,
taskId=c502e058-4b72-4d7f-9c97-a264866289e2), log id: 6a1d669c
2015-08-28 15:36:21,788 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-8-thread-21) FINISH, HSMClearTaskVDSCommand, log id:
6a1d669c
2015-08-28 15:36:21,790 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-8-thread-21) FINISH, SPMClearTaskVDSCommand, log id:
60494436
2015-08-28 15:36:21,802 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-8-thread-21) BaseAsyncTask::removeTaskFromDB: Removed
task c502e058-4b72-4d7f-9c97-a264866289e2 from DataBase
2015-08-28 15:36:21,804 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-8-thread-21) SPMAsyncTask::ClearAsyncTask: Attempting
to clear task 0221e559-0eec-468b-bc4f-a7aaa487661a
2015-08-28 15:36:21,806 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-8-thread-21) START, SPMClearTaskVDSCommand(
storagePoolId = 00000002-0002-0002-0002-000000000021, ignoreFailoverLimit =
false, taskId = 0221e559-0eec-468b-bc4f-a7aaa487661a), log id: 76547de3
2015-08-28 15:36:21,836 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-8-thread-21) START, HSMClearTaskVDSCommand(HostName =
itsatltovirtaio.domain.local, HostId = b783a2ee-4a63-46ca-9afc-b3b74f0e10ce,
taskId=0221e559-0eec-468b-bc4f-a7aaa487661a), log id: 2514fec6
2015-08-28 15:36:21,856 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(org.ovirt.thread.pool-8-thread-21) FINISH, HSMClearTaskVDSCommand, log id:
2514fec6
2015-08-28 15:36:21,858 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(org.ovirt.thread.pool-8-thread-21) FINISH, SPMClearTaskVDSCommand, log id:
76547de3
2015-08-28 15:36:21,869 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(org.ovirt.thread.pool-8-thread-21) BaseAsyncTask::removeTaskFromDB: Removed
task 0221e559-0eec-468b-bc4f-a7aaa487661a from DataBase
2015-08-28 15:36:21,871 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-21) CommandAsyncTask::HandleEndActionResult
[within thread]: Removing CommandMultiAsyncTasks object for entity
44173a42-970f-42b3-8d09-ca113c58b5df
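For what it's worth, "Error creating a new volume, code = 205" is VDSM's
VolumeCreationError: the engine is only relaying a failure that happened on
the SPM host, so the root cause should be in the SPM's vdsm.log around
15:36. To get a clean correlation point, the same snapshot can be re-issued
from a script; a minimal sketch with the 3.5-era Python SDK
(ovirt-engine-sdk-python 3.x) follows, where the engine URL and credentials
are placeholders:

# Hedged sketch: re-issue the failing snapshot via the oVirt 3.x Python SDK
# so the resulting engine.log / vdsm.log entries can be correlated.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret',
          insecure=True)  # lab setup only: skip CA validation
try:
    vm = api.vms.get(name='Katello_2.2')
    # Same snapshot description as in the engine.log above.
    vm.snapshots.add(params.Snapshot(description='before upg'))
finally:
    api.disconnect()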
I have an ancient VM that I need to migrate to oVirt if possible. It is
Windows 2000 Server on VMware ESXi 4.1 (yeah, I know, please don't
laugh).
Is there any possibility of making this work? Reinstalling the Windows
system (or upgrading) is just not practical at this time, and I need to
get this thing off of the old VMware box.
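The obvious candidate seems to be virt-v2v, which in principle can pull a
guest straight off an ESXi host into an oVirt export storage domain; the
sketch below is roughly what I would try, though I have no idea whether a
virt-v2v of this era copes with a Windows 2000 guest at all (every
hostname, the export path, and the network name are placeholders):

# Heavily hedged sketch: drive virt-v2v from Python to copy a guest from
# ESXi into an oVirt export domain. Whether Windows 2000 is a supported
# conversion target is an open question; all names below are placeholders.
import subprocess

subprocess.check_call([
    'virt-v2v',
    '-ic', 'esx://esx.example.com/?no_verify=1',  # source ESXi host
    '-o', 'rhev',                                 # target an oVirt/RHEV export domain
    '-os', 'nfs.example.com:/export',             # export domain's NFS path
    '--network', 'ovirtmgmt',                     # map the guest NIC to this network
    'win2000-vm',                                 # VM name as known to ESXi
])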
--
Chris Adams <cma(a)cmadams.net>