Re: [Users] OVirt-Engine 3.3 RC - Add Fedora 19 host fails [SOLVED]
by Markus Stockhausen
Hello,

just in case someone else runs into this problem: for me it turned out to be a
naming convention issue. The ifcfg file names did not match the interfaces shown.

Steps to fix it included:

- rename the ifcfg-enp1s0, ... scripts to the real interface names, ifcfg-p49p1, ...
- remove the NAME parameter from these scripts
- switch to the network service:

systemctl disable NetworkManager
systemctl stop NetworkManager.service
service network start
chkconfig network on
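For reference, a renamed script might look roughly like this; the values below are
just placeholders, keep whatever addressing the original script had:

# /etc/sysconfig/network-scripts/ifcfg-p49p1   (NAME= line removed)
DEVICE=p49p1
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no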

Markus
11 years, 3 months
[Users] oVirt Solaris support
by René Koch (ovido)
Hi,
I want to start the discussion about Solaris support on oVirt again, as
there was no solution for it yet.
On my oVirt 3.2.2 environment I installed Solaris 11 U1 with the
following specs:
* Operating System: Other
* nic1: rtl8139
* Disk1: IDE (Thin Provision)
* Host: CentOS 6.4 with qemu-kvm-0.12.1.2-2.355.0.1.el6.centos.7.x86_64
These are the same settings as on my RHEL 6.4 KVM host (except I can
choose Solaris 10 as OS in virt-manager), which has KVM version:
qemu-kvm-rhev-0.12.1.2-2.295.el6_3.2.x86_64 (I wanted to use this host
as a RHEV host, so the qemu-kvm-rhev package is installed, in case you
wonder).
What's working:
* OS installation on IDE disk
* Bringing up network interface
What's not working on oVirt:
* Network connections - on RHEL 6.4 with plain libvirt/kvm this is
working...
I can see the mac address on my CentOS host, but can't ping the Solaris
vm:
# brctl showmacs ovirtmgmt | egrep '00:99:4a:00:64:83|port'
port no mac addr is local? ageing timer
2 00:99:4a:00:64:83 no 10.72
# arp -an | grep '00:99:4a:00:64:83'
? (10.0.100.123) at 00:99:4a:00:64:83 [ether] on ovirtmgmt
When using tcpdump on the vnet interface which belongs to the Solaris vm
(ip 10.0.100.123) I can see ARP requests from the vm for ip address of
my CentOS host (10.0.100.42) but no response to it. Same when pinging
other ips in this network:
# tcpdump -n -i vnet2
tcpdump: WARNING: vnet2: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol
decode
listening on vnet2, link-type EN10MB (Ethernet), capture size 65535
bytes
18:15:35.987868 ARP, Request who-has 10.0.100.42 (Broadcast) tell
10.0.100.123, length 46
18:15:36.487399 ARP, Request who-has 10.0.100.42 (Broadcast) tell
10.0.100.123, length 46
18:15:36.987536 ARP, Request who-has 10.0.100.42 (Broadcast) tell
10.0.100.123, length 46
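For comparison, a capture on the bridge itself (rather than the vnet) should show
whether any ARP replies make it that far; a quick check, assuming ovirtmgmt is the
bridge carrying this traffic:

# tcpdump -n -e -i ovirtmgmt arp and host 10.0.100.123

If replies show up on ovirtmgmt but never on vnet2, the frames are being dropped
somewhere between the bridge and the tap device.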
I also compared the qemu-kvm process list on the KVM host with the oVirt
machine and can't see many differences, except that oVirt passes more
information, like smbios.
oVirt host:
/usr/libexec/qemu-kvm
<snip>
-netdev tap,fd=27,id=hostnet0
-device
rtl8139,netdev=hostnet0,id=net0,mac=00:99:4a:00:64:83,bus=pci.0,addr=0x3
RHEL KVM host:
/usr/libexec/qemu-kvm
<snip>
-netdev tap,fd=32,id=hostnet0
-device
rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:51:c2:97,bus=pci.0,addr=0x3
Any suggestions on how to troubleshoot / get Solaris networking running
are welcome.
Changing the interface to e1000 doesn't work either.
Thanks,
René
11 years, 3 months
[Users] Disk live migration issue
by Anil Dhingra
Hi Guys
does anyone have any idea why, after moving a disk from one storage domain to
another, it creates 2 disks under the target domain:
1- preallocated
2- thin provisioned
Please find the attached output below. I moved VM-2_disk1 from the "VM-disk"
storage domain to the "Migration" domain.
But under the "Migration" storage domain there are 2 disks, and under the
"virtual machine" disk section there is only 1 disk. Why? Is it due to the
snapshot which was created during the move, or do we need to manually delete
the other disk after the move?
Thanks
Anil D
11 years, 3 months
Re: [Users] Live Migration failed oVirt 3.3 Nightly
by Dan Kenigsberg
On Sun, Sep 15, 2013 at 08:44:18PM +1000, Andrew Lau wrote:
> On Sun, Sep 15, 2013 at 8:00 PM, Dan Kenigsberg <danken(a)redhat.com> wrote:
>
> > On Sun, Sep 15, 2013 at 06:48:41PM +1000, Andrew Lau wrote:
> > > Hi Dan,
> > >
> > > Certainly, I've uploaded them to fedora's paste bin and tried to snip
> > just
> > > the relevant details.
> > >
> > > Sender (hv01.melb.domain.net):
> > > http://paste.fedoraproject.org/39660/92339651/
> >
> > This one has
> >
> > libvirtError: operation failed: Failed to connect to remote libvirt
> > URI qemu+tls://hv02.melb.domain.net/system
> >
> > which is most often related to firewall issues, and sometimes to a key
> > mismatch.
> >
> > Does
> > virsh -c qemu+tls://hv02.melb.domain.net/system capabilities
> > work when run from the command line of hv01?
> >
> > Dan.
> > > Receiver (hv02.melb.domain.net):
> > > http://paste.fedoraproject.org/39661/23406913/
> > >
> > > VM being transfered is ovirt_guest_vm
> > >
> > > Thanks,
> > > Andrew
> >
>
> virsh -c qemu+tls://hv02.melb.domain.net/system
> 2013-09-15 10:41:10.620+0000: 23994: info : libvirt version: 0.10.2,
> package: 18.el6_4.9 (CentOS BuildSystem <http://bugs.centos.org>,
> 2013-07-02-11:19:29, c6b8.bsys.dev.centos.org)
> 2013-09-15 10:41:10.620+0000: 23994: warning :
> virNetTLSContextCheckCertificate:1102 : Certificate check failed
> Certificate failed validation: The certificate hasn't got a known issuer.
Would you share your
openssl x509 -in /etc/pki/vdsm/certs/cacert.pem -text
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -text
on both hosts? This content may be sensitive, and may not
provide an answer as to why libvirt on the source host cannot contact libvirtd
on the other host. So before you do that, would you test whether
vdsClient -s hv02.melb.domain.net getVdsCapabilities
works when run on hv01? It may be that the certificates are fine, but
libvirt is not configured to use the correct ones.
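If it helps, printing just the issuer/subject/expiry of each certificate makes the
comparison between the two hosts quicker; a minimal variant of the commands above:

openssl x509 -in /etc/pki/vdsm/certs/cacert.pem -noout -issuer -subject -enddate
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -issuer -subject -enddate

The CA issuer/subject should be identical on both hosts, and each vdsmcert should
be issued by that CA.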
Dan.
11 years, 3 months
Re: [Users] single VM disappeared
by Hans-Joachim
Hi,
it should be, as you will lose all your imported VMs by restarting the engine.
Hans-Joachim
Date: Fri, 13 Sep 2013 11:19:09 -0400 (EDT)
From: Ofer Schreiber <oschreib(a)redhat.com>
To: Yair Zaslavsky <yzaslavs(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [Users] single VM disappeared

Nothing actually. Unless it's a huge blocker, then we might rebuild the engine
with it. Otherwise, it will be in 3.3.1, which will be a month from now.
11 years, 3 months
[Users] Error 500 with python sdk on3.3
by Hervé Leclerc
(My apologies if the message is received twice by the mailing list)
I've got a 500 error when I try to get the list of VMs with the Python SDK on
oVirt 3.3, with the statement: vms=api.vms.list(cluster='local_cluster')
Is it a known issue?
I've also noticed the same problem with rbovirt.
I've put code to reproduce here https://gist.github.com/herveleclerc/6492361
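As a cross-check it can be useful to issue the equivalent REST request directly,
to see whether the 500 also happens outside the SDK; a sketch, where ENGINE-FQDN
and PASSWORD are placeholders and admin@internal is assumed as the user:

curl -k -u 'admin@internal:PASSWORD' 'https://ENGINE-FQDN/api/vms?search=cluster%3Dlocal_cluster'

If the same NullPointerException comes back there, the problem is in the REST
backend rather than in the Python binding.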
Here is the log from server.log:
2013-09-09 09:34:02,141 ERROR
[org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/api].[org.ovirt.engine.api.restapi.BackendApplication]]
(ajp--127.0.0.1-8702-7) Servlet.service() for servlet
org.ovirt.engine.api.restapi.BackendApplication threw exception:
org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
at
org.jboss.resteasy.core.SynchronousDispatcher.handleApplicationException(SynchronousDispatcher.java:340)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.SynchronousDispatcher.handleException(SynchronousDispatcher.java:214)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.SynchronousDispatcher.handleInvokerException(SynchronousDispatcher.java:190)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:540)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:502)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:119)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:208)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:55)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:50)
[resteasy-jaxrs-2.3.2.Final.jar:]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
[jboss-servlet-api_3.0_spec-1.0.0.Final.jar:1.0.0.Final]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:489)
[jbossweb-7.0.13.Final.jar:]
at
org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:153)
[jboss-as-web-7.1.1.Final.jar:7.1.1.Final]
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
[jbossweb-7.0.13.Final.jar:]
at org.jboss.web.rewrite.RewriteValve.invoke(RewriteValve.java:466)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:368)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:505)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(AjpProtocol.java:445)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:930)
[jbossweb-7.0.13.Final.jar:]
at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_25]
Caused by: java.lang.NullPointerException
at
org.ovirt.engine.api.common.util.LinkHelper.getCollection(LinkHelper.java:523)
[interface-common-jaxrs.jar:]
at
org.ovirt.engine.api.common.util.LinkHelper.getUriBuilder(LinkHelper.java:569)
[interface-common-jaxrs.jar:]
at
org.ovirt.engine.api.restapi.resource.AbstractBackendResource.linkSubCollections(AbstractBackendResource.java:287)
[restapi-jaxrs.jar:]
at
org.ovirt.engine.api.restapi.resource.AbstractBackendResource.addLinks(AbstractBackendResource.java:228)
[restapi-jaxrs.jar:]
at
org.ovirt.engine.api.restapi.resource.AbstractBackendResource.addLinks(AbstractBackendResource.java:218)
[restapi-jaxrs.jar:]
at
org.ovirt.engine.api.restapi.resource.BackendVmsResource.mapCollection(BackendVmsResource.java:407)
[restapi-jaxrs.jar:]
at
org.ovirt.engine.api.restapi.resource.BackendVmsResource.list(BackendVmsResource.java:71)
[restapi-jaxrs.jar:]
at sun.reflect.GeneratedMethodAccessor297.invoke(Unknown Source)
[:1.7.0_25]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_25]
at java.lang.reflect.Method.invoke(Method.java:606)
[rt.jar:1.7.0_25]
at
org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:155)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:257)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:257)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:222)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:211)
[resteasy-jaxrs-2.3.2.Final.jar:]
at
org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:525)
[resteasy-jaxrs-2.3.2.Final.jar:]
... 21 more
Hervé Leclerc
CTO
Alter Way
1, rue royale
9 ème étage
92210 St Cloud
+33 1 78152407
+33 6 83979598
<http://www.alterway.fr/signatures/url/1>
11 years, 3 months
[Users] Live Migration failed oVirt 3.3 Nightly
by Andrew Lau
Hi,
I recently upgraded to the oVirt Nightly repo because I needed some patches
not yet in 3.3, but it appears to have broken the live migration.
Both of the nodes have the storage (glusterfs replicate), oVirt gives the
error:
Migration failed due to Error: Fatal error during migration (VM: gl01,
Source: HV02, Destination: HV01).
It seems to successfully transfer but then the receiving vdsm node gives
the following error:
Thread-58935::ERROR::2013-09-13
18:36:52,004::vm::2062::vm.Vm::(_startUnderlyingVm)
vmId=`3867341c-b465-4285-9c35-0c5972527808`::The vm start process failed
if ret is None:raise libvirtError('virDomainLookupByUUIDString()
failed', conn=self)
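As a quick sanity check on the receiving node, listing what libvirt thinks exists
there (read-only mode) shows whether the incoming domain was ever defined before
the lookup failed; assuming plain virsh is usable on the host:

virsh -r list --all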
Suggestions?
Thanks,
Andrew
11 years, 3 months
[Users] API bad SQL Statement (3.3.0-3.el6)
by Hervé Leclerc
Hello,
I have this SQL error when I try to use the engine API with this query:
https://myovirt/api/clusters?search=datacenter%3Dh2o
Results
<fault>
<reason>Operation Failed</reason>
<detail>statementcallback; bad sql grammar [select * from (select * from
vds_groups_view where ( vds_group_id in (select
vds_groups_storage_domain.vds_group_id from vds_groups_storage_domain left outer
join storage_pool_with_storage_domain on
vds_groups_storage_domain.storage_pool_id=storage_pool_with_storage_domain.id
where ( storage_pool_with_storage_domain.name like '%h2o%' or
storage_pool_with_storage_domain.description like '%h2o%' or
storage_pool_with_storage_domain.comment like '%h2o%' ) )) order by name asc )
as t1 offset (1 -1) limit 100]; nested exception is
org.postgresql.util.psqlexception: erreur: la colonne
storage_pool_with_storage_domain.comment n'existe pas
position: 411
</detail>
</fault>
/api
<product_info><name>oVirt Engine</name><vendor>ovirt.org</vendor><version
major="3" minor="3" build="0"
revision="0"/><full_version>3.3.0-3.el6</full_version></product_info><summary>
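To confirm whether the view really lacks that column on this build, its definition
can be inspected directly in the engine database; a read-only sketch, assuming the
default database name 'engine':

sudo -u postgres psql engine -c '\d+ storage_pool_with_storage_domain'

If the comment column is missing from the view, the generated search SQL and the
installed schema are simply out of sync.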
Hervé Leclerc
CTO
Alter Way
1, rue royale
9 ème étage
92210 St Cloud
+33 1 78152407
+33 6 83979598
<http://www.alterway.fr/signatures/url/1>
11 years, 3 months
Re: [Users] oVirt 3.3 -- Failed to run VM: internal error unexpected address type for ide disk
by SULLIVAN, Chris (WGK)
Hi,
I am getting the exact same issue with a non-AIO oVirt 3.3.0-2.fc19 setup. The only workaround I've found so far is to delete the offending VM, recreate it, and reattach the disks. The recreated VM will work normally until it is shut down, after which it will fail to start with the same error.
Engine and VDSM log excerpts are below. Versions:
- Fedora 19 (3.10.10-200)
- oVirt 3.3.0-2
- VDSM 4.12.1
- libvirt 1.1.2-1
- gluster 3.4.0.8
I'll upgrade to the latest oVirt 3.3 RC to see if the issue persists.
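For what it's worth, the vdsm log below shows the cdrom device (iface 'ide') being
passed a PCI-type address, which is what libvirt rejects. A read-only way to see
the address the engine has stored for that device, assuming the default 'engine'
database name, is:

sudo -u postgres psql engine -c "SELECT device, type, address FROM vm_device WHERE vm_id = '980cb3c8-8af8-4795-9c21-85582d37e042' AND device = 'cdrom';"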
Kind regards,
Chris
ovirt-engine.log
2013-09-12 15:01:21,746 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-41) [4b57b27f] START, CreateVmVDSCommand(HostName = r410-05, HostId = 88811ea8-b030-47fd-ae3d-23cb2c24f6f6, vmId=980cb3c8-8af8-4795-9c21-85582d37e042, vm=VM [rhev-compute-01]), log id: 1ea52d74
2013-09-12 15:01:21,749 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-41) [4b57b27f] START, CreateVDSCommand(HostName = r410-05, HostId = 88811ea8-b030-47fd-ae3d-23cb2c24f6f6, vmId=980cb3c8-8af8-4795-9c21-85582d37e042, vm=VM [rhev-compute-01]), log id: 735950cf
2013-09-12 15:01:21,801 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-41) [4b57b27f] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand spiceSslCipherSuite=DEFAULT,memSize=8144,kvmEnable=true,smp=4,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,memGuaranteedSize=8144,pitReinjection=false,nice=0,display=vnc,smartcardEnable=false,tabletEnable=true,smpCoresPerSocket=4,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,displayNetwork=ovirtmgmt,timeOffset=-61,transparentHugePages=true,vmId=980cb3c8-8af8-4795-9c21-85582d37e042,devices=[Ljava.util.HashMap;@12177fe2,acpiEnable=true,vmName=rhev-compute-01,cpuType=hostPassthrough,custom={}
2013-09-12 15:01:21,802 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (pool-6-thread-41) [4b57b27f] FINISH, CreateVDSCommand, log id: 735950cf
2013-09-12 15:01:21,812 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-41) [4b57b27f] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 1ea52d74
2013-09-12 15:01:21,812 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-41) [4b57b27f] Lock freed to object EngineLock [exclusiveLocks= key: 980cb3c8-8af8-4795-9c21-85582d37e042 value: VM
, sharedLocks= ]
2013-09-12 15:01:21,820 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-41) [4b57b27f] Correlation ID: 4b57b27f, Job ID: 6be840c8-68cb-4c07-a365-c979c3c7e8ae, Call Stack: null, Custom Event ID: -1, Message: VM rhev-compute-01 was started by admin@internal (Host: r410-05).
2013-09-12 15:01:22,157 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-53) START, DestroyVDSCommand(HostName = r410-05, HostId = 88811ea8-b030-47fd-ae3d-23cb2c24f6f6, vmId=980cb3c8-8af8-4795-9c21-85582d37e042, force=false, secondsToWait=0, gracefully=false), log id: 45ed2104
2013-09-12 15:01:22,301 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-53) FINISH, DestroyVDSCommand, log id: 45ed2104
2013-09-12 15:01:22,317 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-53) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM rhev-compute-01 is down. Exit message: internal error: unexpected address type for ide disk.
2013-09-12 15:01:22,317 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-53) Running on vds during rerun failed vm: null
2013-09-12 15:01:22,318 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-53) vm rhev-compute-01 running in db and not running in vds - add to rerun treatment. vds r410-05
2013-09-12 15:01:22,318 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-53) START, FullListVdsCommand(HostName = r410-05, HostId = 88811ea8-b030-47fd-ae3d-23cb2c24f6f6, vds=Host[r410-05], vmIds=[980cb3c8-8af8-4795-9c21-85582d37e042]), log id: 20beb10f
2013-09-12 15:01:22,321 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-53) FINISH, FullListVdsCommand, return: [Ljava.util.HashMap;@475a6094, log id: 20beb10f
2013-09-12 15:01:22,334 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-53) Rerun vm 980cb3c8-8af8-4795-9c21-85582d37e042. Called from vds r410-05
2013-09-12 15:01:22,346 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-41) Correlation ID: 4b57b27f, Job ID: 6be840c8-68cb-4c07-a365-c979c3c7e8ae, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM rhev-compute-01 on Host r410-05.
2013-09-12 15:01:22,359 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-41) Lock Acquired to object EngineLock [exclusiveLocks= key: 980cb3c8-8af8-4795-9c21-85582d37e042 value: VM
, sharedLocks= ]
2013-09-12 15:01:22,378 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-41) START, IsVmDuringInitiatingVDSCommand( vmId = 980cb3c8-8af8-4795-9c21-85582d37e042), log id: 485ed444
2013-09-12 15:01:22,378 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-41) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 485ed444
2013-09-12 15:01:22,380 WARN [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-41) CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM
2013-09-12 15:01:22,380 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-41) Lock freed to object EngineLock [exclusiveLocks= key: 980cb3c8-8af8-4795-9c21-85582d37e042 value: VM
, sharedLocks= ]
2013-09-12 15:01:22,390 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-41) Correlation ID: 4b57b27f, Job ID: 6be840c8-68cb-4c07-a365-c979c3c7e8ae, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM rhev-compute-01 (User: admin@internal).
vdsm.log
Thread-143925::DEBUG::2013-09-12 15:01:21,777::BindingXMLRPC::979::vds::(wrapper) client [172.30.18.242]::call vmCreate with ({'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'tabletEnable': 'true', 'vmId': '980cb3c8-8af8-4795-9c21-85582d37e042', 'memGuaranteedSize': 8144, 'spiceSslCipherSuite': 'DEFAULT', 'timeOffset': '-61', 'cpuType': 'hostPassthrough', 'custom': {}, 'smp': '4', 'vmType': 'kvm', 'memSize': 8144, 'smpCoresPerSocket': '4', 'vmName': 'rhev-compute-01', 'nice': '0', 'smartcardEnable': 'false', 'keyboardLayout': 'en-us', 'kvmEnable': 'true', 'pitReinjection': 'false', 'transparentHugePages': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'device': 'cirrus', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId': '87df9d21-bf47-45f9-ab45-7f2f950fd788', 'address': {'bus': '0x00', ' slot': '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'index': '2', 'iface': 'ide', 'address': {'bus': '0x00', ' slot': '0x06', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'ef25939b-a5ff-456e-978f-53e7600b83ce', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'specParams': {}, 'readonly': 'false', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'optional': 'false', 'deviceId': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'poolID': 'accbd988-31c6-4803-9204-a584067fa157', 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:ab:9c:6a', 'linkActive': 'true', 'network': 'ovirtmgmt', 'custom': {}, 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'e9f8e70f-8cb9-496b-b44e-d75e56515c27', 'address': {'bus': '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '1c4aef1b-f0eb-47c9-83a8-f983ad3e47bf'}], 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'display': 'vnc'},) {} flowID [4b57b27f]
Thread-143925::INFO::2013-09-12 15:01:21,784::clientIF::366::vds::(createVm) vmContainerLock acquired by vm 980cb3c8-8af8-4795-9c21-85582d37e042
Thread-143925::DEBUG::2013-09-12 15:01:21,790::clientIF::380::vds::(createVm) Total desktops after creation of 980cb3c8-8af8-4795-9c21-85582d37e042 is 1
Thread-143926::DEBUG::2013-09-12 15:01:21,790::vm::2015::vm.Vm::(_startUnderlyingVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::Start
Thread-143925::DEBUG::2013-09-12 15:01:21,791::BindingXMLRPC::986::vds::(wrapper) return vmCreate with {'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'WaitForLaunch', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'tabletEnable': 'true', 'pid': '0', 'memGuaranteedSize': 8144, 'timeOffset': '-61', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'hostPassthrough', 'smp': '4', 'clientIp': '', 'nicModel': 'rtl8139,pv', 'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection': 'false', 'vmId': '980cb3c8-8af8-4795-9c21-85582d37e042', 'transparentHugePages': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'device': 'cirrus', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId': '87df9d21-bf47-45f9-ab45-7f2f950fd788', 'address': {'bus': '0x00', ' slot': '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'index': '2', 'iface': 'ide', 'address': {'bus': '0x00', ' slot': '0x06', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'ef25939b-a5ff-456e-978f-53e7600b83ce', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'specParams': {}, 'readonly': 'false', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'optional': 'false', 'deviceId': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'poolID': 'accbd988-31c6-4803-9204-a584067fa157', 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:ab:9c:6a', 'linkActive': 'true', 'network': 'ovirtmgmt', 'custom': {}, 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'e9f8e70f-8cb9-496b-b44e-d75e56515c27', 'address': {'bus': '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '1c4aef1b-f0eb-47c9-83a8-f983ad3e47bf'}], 'custom': {}, 'vmType': 'kvm', 'memSize': 8144, 'displayIp': '172.30.18.247', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'smpCoresPerSocket': '4', 'vmName': 'rhev-compute-01', 'display': 'vnc', 'nice': '0'}}
Thread-143926::DEBUG::2013-09-12 15:01:21,792::vm::2019::vm.Vm::(_startUnderlyingVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::_ongoingCreations acquired
Thread-143926::INFO::2013-09-12 15:01:21,794::vm::2815::vm.Vm::(_run) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::VM wrapper has started
Thread-143926::DEBUG::2013-09-12 15:01:21,798::task::579::TaskManager.Task::(_updateState) Task=`f5a3b7b8-3ac9-4b57-b184-64580530aed2`::moving from state init -> state preparing
Thread-143926::INFO::2013-09-12 15:01:21,800::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='e281bd49-bc11-4acb-8634-624eac6d3358', spUUID='accbd988-31c6-4803-9204-a584067fa157', imgUUID='8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', volUUID='1abdc967-c32c-4862-a36b-b93441c4a7d5', options=None)
Thread-143926::DEBUG::2013-09-12 15:01:21,815::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143926::INFO::2013-09-12 15:01:21,818::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '10737418240', 'apparentsize': '10737418240'}
Thread-143926::DEBUG::2013-09-12 15:01:21,818::task::1168::TaskManager.Task::(prepare) Task=`f5a3b7b8-3ac9-4b57-b184-64580530aed2`::finished: {'truesize': '10737418240', 'apparentsize': '10737418240'}
Thread-143926::DEBUG::2013-09-12 15:01:21,818::task::579::TaskManager.Task::(_updateState) Task=`f5a3b7b8-3ac9-4b57-b184-64580530aed2`::moving from state preparing -> state finished
Thread-143926::DEBUG::2013-09-12 15:01:21,818::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-143926::DEBUG::2013-09-12 15:01:21,819::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-143926::DEBUG::2013-09-12 15:01:21,819::task::974::TaskManager.Task::(_decref) Task=`f5a3b7b8-3ac9-4b57-b184-64580530aed2`::ref 0 aborting False
Thread-143926::INFO::2013-09-12 15:01:21,819::clientIF::325::vds::(prepareVolumePath) prepared volume path:
Thread-143926::DEBUG::2013-09-12 15:01:21,820::task::579::TaskManager.Task::(_updateState) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::moving from state init -> state preparing
Thread-143926::INFO::2013-09-12 15:01:21,820::logUtils::44::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='e281bd49-bc11-4acb-8634-624eac6d3358', spUUID='accbd988-31c6-4803-9204-a584067fa157', imgUUID='8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', volUUID='1abdc967-c32c-4862-a36b-b93441c4a7d5')
Thread-143926::DEBUG::2013-09-12 15:01:21,821::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.e281bd49-bc11-4acb-8634-624eac6d3358`ReqID=`4b30196c-7b93-41b9-92c4-b632161a94a0`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3240' at 'prepareImage'
Thread-143926::DEBUG::2013-09-12 15:01:21,821::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' for lock type 'shared'
Thread-143926::DEBUG::2013-09-12 15:01:21,821::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' is free. Now locking as 'shared' (1 active user)
Thread-143926::DEBUG::2013-09-12 15:01:21,822::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.e281bd49-bc11-4acb-8634-624eac6d3358`ReqID=`4b30196c-7b93-41b9-92c4-b632161a94a0`::Granted request
Thread-143926::DEBUG::2013-09-12 15:01:21,822::task::811::TaskManager.Task::(resourceAcquired) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::_resourcesAcquired: Storage.e281bd49-bc11-4acb-8634-624eac6d3358 (shared)
Thread-143926::DEBUG::2013-09-12 15:01:21,822::task::974::TaskManager.Task::(_decref) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::ref 1 aborting False
Thread-143926::DEBUG::2013-09-12 15:01:21,824::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143926::INFO::2013-09-12 15:01:21,877::image::215::Storage.Image::(getChain) sdUUID=e281bd49-bc11-4acb-8634-624eac6d3358 imgUUID=8863c4d0-0ff3-4590-8f37-e6bb6c9d195e chain=[<storage.glusterVolume.GlusterVolume object at 0x2500d10>]
Thread-143926::DEBUG::2013-09-12 15:01:21,904::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143926::INFO::2013-09-12 15:01:21,954::logUtils::47::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'chain': [{'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'vmVolInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e'}]}
Thread-143926::DEBUG::2013-09-12 15:01:21,954::task::1168::TaskManager.Task::(prepare) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::finished: {'info': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'chain': [{'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'vmVolInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e'}]}
Thread-143926::DEBUG::2013-09-12 15:01:21,954::task::579::TaskManager.Task::(_updateState) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::moving from state preparing -> state finished
Thread-143926::DEBUG::2013-09-12 15:01:21,955::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.e281bd49-bc11-4acb-8634-624eac6d3358': < ResourceRef 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358', isValid: 'True' obj: 'None'>}
Thread-143926::DEBUG::2013-09-12 15:01:21,955::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-143926::DEBUG::2013-09-12 15:01:21,955::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358'
Thread-143926::DEBUG::2013-09-12 15:01:21,956::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' (0 active users)
Thread-143926::DEBUG::2013-09-12 15:01:21,956::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' is free, finding out if anyone is waiting for it.
Thread-143926::DEBUG::2013-09-12 15:01:21,956::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358', Clearing records.
Thread-143926::DEBUG::2013-09-12 15:01:21,957::task::974::TaskManager.Task::(_decref) Task=`80aa83f4-5f90-4a9e-97da-1f7edab49894`::ref 0 aborting False
Thread-143926::INFO::2013-09-12 15:01:21,957::clientIF::325::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143926::DEBUG::2013-09-12 15:01:21,974::vm::2872::vm.Vm::(_run) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
<name>rhev-compute-01</name>
<uuid>980cb3c8-8af8-4795-9c21-85582d37e042</uuid>
<memory>8339456</memory>
<currentMemory>8339456</currentMemory>
<vcpu>4</vcpu>
<memtune>
<min_guarantee>8339456</min_guarantee>
</memtune>
<devices>
<channel type="unix">
<target name="com.redhat.rhevm.vdsm" type="virtio"/>
<source mode="bind" path="/var/lib/libvirt/qemu/channels/980cb3c8-8af8-4795-9c21-85582d37e042.com.redhat.rhevm.vdsm"/>
</channel>
<channel type="unix">
<target name="org.qemu.guest_agent.0" type="virtio"/>
<source mode="bind" path="/var/lib/libvirt/qemu/channels/980cb3c8-8af8-4795-9c21-85582d37e042.org.qemu.guest_agent.0"/>
</channel>
<input bus="usb" type="tablet"/>
<graphics autoport="yes" keymap="en-us" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc">
<listen network="vdsm-ovirtmgmt" type="network"/>
</graphics>
<controller model="virtio-scsi" type="scsi"/>
<video>
<address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/>
<model heads="1" type="cirrus" vram="65536"/>
</video>
<interface type="bridge">
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/>
<mac address="00:1a:4a:ab:9c:6a"/>
<model type="virtio"/>
<source bridge="ovirtmgmt"/>
<filterref filter="vdsm-no-mac-spoofing"/>
<link state="up"/>
</interface>
<disk device="cdrom" snapshot="no" type="file">
<address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
<source file="" startupPolicy="optional"/>
<target bus="ide" dev="hdc"/>
<readonly/>
<serial/>
</disk>
<disk device="disk" snapshot="no" type="network">
<source name="hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5" protocol="gluster">
<host name="r410-02" port="0" transport="tcp"/>
</source>
<target bus="virtio" dev="vda"/>
<serial>8863c4d0-0ff3-4590-8f37-e6bb6c9d195e</serial>
<boot order="1"/>
<driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>
<memballoon model="virtio"/>
</devices>
<os>
<type arch="x86_64" machine="pc-1.0">hvm</type>
<smbios mode="sysinfo"/>
</os>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">oVirt</entry>
<entry name="product">oVirt Node</entry>
<entry name="version">19-3</entry>
<entry name="serial">4C4C4544-0031-4810-8042-B4C04F353253</entry>
<entry name="uuid">980cb3c8-8af8-4795-9c21-85582d37e042</entry>
</system>
</sysinfo>
<clock adjustment="-61" offset="variable">
<timer name="rtc" tickpolicy="catchup"/>
</clock>
<features>
<acpi/>
</features>
<cpu match="exact" mode="host-passthrough">
<topology cores="4" sockets="1" threads="1"/>
</cpu>
</domain>
Thread-143926::DEBUG::2013-09-12 15:01:21,987::libvirtconnection::101::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 1 edom: 10 level: 2 message: internal error: unexpected address type for ide disk
Thread-143926::DEBUG::2013-09-12 15:01:21,987::vm::2036::vm.Vm::(_startUnderlyingVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::_ongoingCreations released
Thread-143926::ERROR::2013-09-12 15:01:21,987::vm::2062::vm.Vm::(_startUnderlyingVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/vm.py", line 2906, in _run
self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2909, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: unexpected address type for ide disk
Thread-143926::DEBUG::2013-09-12 15:01:21,989::vm::2448::vm.Vm::(setDownStatus) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::Changed state to Down: internal error: unexpected address type for ide disk
Thread-143929::DEBUG::2013-09-12 15:01:22,162::BindingXMLRPC::979::vds::(wrapper) client [172.30.18.242]::call vmGetStats with ('980cb3c8-8af8-4795-9c21-85582d37e042',) {}
Thread-143929::DEBUG::2013-09-12 15:01:22,162::BindingXMLRPC::986::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Down', 'hash': '0', 'exitMessage': 'internal error: unexpected address type for ide disk', 'vmId': '980cb3c8-8af8-4795-9c21-85582d37e042', 'timeOffset': '-61', 'exitCode': 1}]}
Thread-143930::DEBUG::2013-09-12 15:01:22,166::BindingXMLRPC::979::vds::(wrapper) client [172.30.18.242]::call vmDestroy with ('980cb3c8-8af8-4795-9c21-85582d37e042',) {}
Thread-143930::INFO::2013-09-12 15:01:22,167::API::317::vds::(destroy) vmContainerLock acquired by vm 980cb3c8-8af8-4795-9c21-85582d37e042
Thread-143930::DEBUG::2013-09-12 15:01:22,167::vm::4258::vm.Vm::(destroy) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::destroy Called
Thread-143930::INFO::2013-09-12 15:01:22,167::vm::4204::vm.Vm::(releaseVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::Release VM resources
Thread-143930::WARNING::2013-09-12 15:01:22,168::vm::1717::vm.Vm::(_set_lastStatus) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::trying to set state to Powering down when already Down
Thread-143930::WARNING::2013-09-12 15:01:22,168::clientIF::337::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x7f0fb8a7d610>> _customize:<bound method Drive._customize of <vm.Drive object at 0x7f0fb8a7d610>> _deviceXML:<disk device="cdrom" snapshot="no" type="file"><address domain="0x0000" function="0x0" slot="0x06" type="pci" bus="0x00"/><source file="" startupPolicy="optional"/><target bus="ide" dev="hdc"/><readonly/><serial></serial></disk> _makeName:<bound method Drive._makeName of <vm.Drive object at 0x7f0fb8a7d610>> _validateIoTuneParams:<bound method Drive._validateIoTuneParams of <vm.Drive object at 0x7f0fb8a7d610>> address:{'bus': '0x00', ' slot': '0x06', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'} apparentsize:0 blockDev:False cache:none conf:{'status': 'Down', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'tabletEnable': 'true', 'pid': '0', 'memGuaranteedSize': 8144, 'timeOffset': '-61', 'keyboardLayout': 'en-us', 'displayPort': '-1', 'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'hostPassthrough', 'smp': '4', 'clientIp': '', 'exitCode': 1, 'nicModel': 'rtl8139,pv', 'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection': 'false', 'vmId': '980cb3c8-8af8-4795-9c21-85582d37e042', 'transparentHugePages': 'true', 'displayNetwork': 'ovirtmgmt', 'devices': [{'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'cirrus', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId': '87df9d21-bf47-45f9-ab45-7f2f950fd788', 'address': {'bus': '0x00', ' slot': '0x02', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}}, {'nicModel': 'pv', 'macAddr': '00:1a:4a:ab:9c:6a', 'linkActive': 'true', 'network': 'ovirtmgmt', 'custom': {}, 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': 'e9f8e70f-8cb9-496b-b44e-d75e56515c27', 'address': {'bus': '0x00', ' slot': '0x03', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'index': '2', 'iface': 'ide', 'address': {'bus': '0x00', ' slot': '0x06', ' domain': '0x0000', ' type': 'pci', ' function': '0x0'}, 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'ef25939b-a5ff-456e-978f-53e7600b83ce', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'volumeInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'index': 0, 'iface': 'virtio', 'apparentsize': '10737418240', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'readonly': 'false', 'shared': 'false', 'truesize': '10737418240', 'type': 'disk', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'reqsize': '0', 'format': 'raw', 'deviceId': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', 'poolID': 'accbd988-31c6-4803-9204-a584067fa157', 'device': 'disk', 'path': '/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'propagateErrors': 'off', 'optional': 'false', 'bootOrder': '1', 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'specParams': {}, 'volumeChain': [{'path': 
'/rhev/data-center/accbd988-31c6-4803-9204-a584067fa157/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5', 'domainID': 'e281bd49-bc11-4acb-8634-624eac6d3358', 'vmVolInfo': {'volPort': '0', 'volType': 'network', 'volfileServer': 'r410-02', 'volTransport': 'tcp', 'protocol': 'gluster', 'path': 'hades/e281bd49-bc11-4acb-8634-624eac6d3358/images/8863c4d0-0ff3-4590-8f37-e6bb6c9d195e/1abdc967-c32c-4862-a36b-b93441c4a7d5'}, 'volumeID': '1abdc967-c32c-4862-a36b-b93441c4a7d5', 'imageID': '8863c4d0-0ff3-4590-8f37-e6bb6c9d195e'}]}, {'device': 'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId': '1c4aef1b-f0eb-47c9-83a8-f983ad3e47bf', 'target': 8339456}], 'custom': {}, 'vmType': 'kvm', 'exitMessage': 'internal error: unexpected address type for ide disk', 'memSize': 8144, 'displayIp': '172.30.18.247', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'smpCoresPerSocket': '4', 'vmName': 'rhev-compute-01', 'display': 'vnc', 'nice': '0'} createXmlElem:<bound method Drive.createXmlElem of <vm.Drive object at 0x7f0fb8a7d610>> device:cdrom deviceId:ef25939b-a5ff-456e-978f-53e7600b83ce getNextVolumeSize:<bound method Drive.getNextVolumeSize of <vm.Drive object at 0x7f0fb8a7d610>> getXML:<bound method Drive.getXML of <vm.Drive object at 0x7f0fb8a7d610>> iface:ide index:2 isDiskReplicationInProgress:<bound method Drive.isDiskReplicationInProgress of <vm.Drive object at 0x7f0fb8a7d610>> isVdsmImage:<bound method Drive.isVdsmImage of <vm.Drive object at 0x7f0fb8a7d610>> log:<logUtils.SimpleLogAdapter object at 0x7f0fb8a9ea90> name:hdc networkDev:False path: readonly:true reqsize:0 serial: shared:false specParams:{'path': ''} truesize:0 type:disk volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last):
File "/usr/share/vdsm/clientIF.py", line 331, in teardownVolumePath
res = self.irs.teardownImage(drive['domainID'],
File "/usr/share/vdsm/vm.py", line 1344, in __getitem__
raise KeyError(key)
KeyError: 'domainID'
Thread-143930::DEBUG::2013-09-12 15:01:22,171::task::579::TaskManager.Task::(_updateState) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::moving from state init -> state preparing
Thread-143930::INFO::2013-09-12 15:01:22,172::logUtils::44::dispatcher::(wrapper) Run and protect: teardownImage(sdUUID='e281bd49-bc11-4acb-8634-624eac6d3358', spUUID='accbd988-31c6-4803-9204-a584067fa157', imgUUID='8863c4d0-0ff3-4590-8f37-e6bb6c9d195e', volUUID=None)
Thread-143930::DEBUG::2013-09-12 15:01:22,172::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.e281bd49-bc11-4acb-8634-624eac6d3358`ReqID=`3d2eb551-2767-44b0-958c-e2bc26b650ca`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3282' at 'teardownImage'
Thread-143930::DEBUG::2013-09-12 15:01:22,173::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' for lock type 'shared'
Thread-143930::DEBUG::2013-09-12 15:01:22,173::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' is free. Now locking as 'shared' (1 active user)
Thread-143930::DEBUG::2013-09-12 15:01:22,173::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.e281bd49-bc11-4acb-8634-624eac6d3358`ReqID=`3d2eb551-2767-44b0-958c-e2bc26b650ca`::Granted request
Thread-143930::DEBUG::2013-09-12 15:01:22,174::task::811::TaskManager.Task::(resourceAcquired) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::_resourcesAcquired: Storage.e281bd49-bc11-4acb-8634-624eac6d3358 (shared)
Thread-143930::DEBUG::2013-09-12 15:01:22,174::task::974::TaskManager.Task::(_decref) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::ref 1 aborting False
Thread-143930::DEBUG::2013-09-12 15:01:22,188::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143930::DEBUG::2013-09-12 15:01:22,217::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143930::DEBUG::2013-09-12 15:01:22,246::fileVolume::520::Storage.Volume::(validateVolumePath) validate path for 1abdc967-c32c-4862-a36b-b93441c4a7d5
Thread-143930::INFO::2013-09-12 15:01:22,300::image::215::Storage.Image::(getChain) sdUUID=e281bd49-bc11-4acb-8634-624eac6d3358 imgUUID=8863c4d0-0ff3-4590-8f37-e6bb6c9d195e chain=[<storage.glusterVolume.GlusterVolume object at 0x7f0fb8760d90>]
Thread-143930::INFO::2013-09-12 15:01:22,300::logUtils::47::dispatcher::(wrapper) Run and protect: teardownImage, Return response: None
Thread-143930::DEBUG::2013-09-12 15:01:22,300::task::1168::TaskManager.Task::(prepare) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::finished: None
Thread-143930::DEBUG::2013-09-12 15:01:22,301::task::579::TaskManager.Task::(_updateState) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::moving from state preparing -> state finished
Thread-143930::DEBUG::2013-09-12 15:01:22,301::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.e281bd49-bc11-4acb-8634-624eac6d3358': < ResourceRef 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358', isValid: 'True' obj: 'None'>}
Thread-143930::DEBUG::2013-09-12 15:01:22,301::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-143930::DEBUG::2013-09-12 15:01:22,302::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358'
Thread-143930::DEBUG::2013-09-12 15:01:22,302::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' (0 active users)
Thread-143930::DEBUG::2013-09-12 15:01:22,302::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358' is free, finding out if anyone is waiting for it.
Thread-143930::DEBUG::2013-09-12 15:01:22,302::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.e281bd49-bc11-4acb-8634-624eac6d3358', Clearing records.
Thread-143930::DEBUG::2013-09-12 15:01:22,303::task::974::TaskManager.Task::(_decref) Task=`19501aff-60ce-46f4-b3c6-63cb8b6d8598`::ref 0 aborting False
Thread-143930::WARNING::2013-09-12 15:01:22,303::utils::113::root::(rmFile) File: /var/lib/libvirt/qemu/channels/980cb3c8-8af8-4795-9c21-85582d37e042.com.redhat.rhevm.vdsm already removed
Thread-143930::WARNING::2013-09-12 15:01:22,303::utils::113::root::(rmFile) File: /var/lib/libvirt/qemu/channels/980cb3c8-8af8-4795-9c21-85582d37e042.org.qemu.guest_agent.0 already removed
Thread-143930::DEBUG::2013-09-12 15:01:22,304::task::579::TaskManager.Task::(_updateState) Task=`277b0c74-d3f2-4a8a-aa18-3084bbd591cf`::moving from state init -> state preparing
Thread-143930::INFO::2013-09-12 15:01:22,304::logUtils::44::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='980cb3c8-8af8-4795-9c21-85582d37e042')
Thread-143930::INFO::2013-09-12 15:01:22,306::logUtils::47::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Thread-143930::DEBUG::2013-09-12 15:01:22,306::task::1168::TaskManager.Task::(prepare) Task=`277b0c74-d3f2-4a8a-aa18-3084bbd591cf`::finished: None
Thread-143930::DEBUG::2013-09-12 15:01:22,307::task::579::TaskManager.Task::(_updateState) Task=`277b0c74-d3f2-4a8a-aa18-3084bbd591cf`::moving from state preparing -> state finished
Thread-143930::DEBUG::2013-09-12 15:01:22,307::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-143930::DEBUG::2013-09-12 15:01:22,307::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-143930::DEBUG::2013-09-12 15:01:22,307::task::974::TaskManager.Task::(_decref) Task=`277b0c74-d3f2-4a8a-aa18-3084bbd591cf`::ref 0 aborting False
Thread-143930::DEBUG::2013-09-12 15:01:22,307::vm::4252::vm.Vm::(deleteVm) vmId=`980cb3c8-8af8-4795-9c21-85582d37e042`::Total desktops after destroy of 980cb3c8-8af8-4795-9c21-85582d37e042 is 0
Thread-143930::DEBUG::2013-09-12 15:01:22,307::BindingXMLRPC::986::vds::(wrapper) return vmDestroy with {'status': {'message': 'Machine destroyed', 'code': 0}}
-----Original Message-----
From: users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of users-request(a)ovirt.org
Sent: Friday, September 13, 2013 7:03 AM
To: users(a)ovirt.org
Subject: Users Digest, Vol 24, Issue 54
Message: 5
Date: Thu, 12 Sep 2013 16:45:49 -0400 (EDT)
From: Jason Brooks <jbrooks(a)redhat.com>
To: users <users(a)ovirt.org>
Subject: [Users] oVirt 3.3 -- Failed to run VM: internal error
unexpected address type for ide disk
I'm experiencing an issue today on my oVirt 3.3 test setup -- it's an AIO
engine+host setup, with a second node on a separate machine. Both machines
are running F19, both have all current F19 updates and all current ovirt-
beta repo updates.
This is on a GlusterFS domain, hosted from a volume on the AIO machine.
Also, I have the neutron external network provider configured, but these
VMs aren't using one of these networks.
selinux permissive on both machines, firewall down on both as well
(firewall rules for gluster don't appear to be set by the engine)
1. Create a new VM w/ virtio disk
2. VM runs normally
3. Power down VM
4. VM won't start, w/ error msg:
internal error unexpected address type for ide disk
5. Changing disk to IDE, removing and re-adding, VM still won't start
6. If created w/ IDE disk from the beginning, VM runs and restarts as
expected.
Is anyone else experiencing something like this? It appears to render the
Gluster FS domain type totally unusable. I wasn't having this problem last
week...
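A rough way to check whether the cdrom device is being handed a PCI-type address
in the create call is to pull the ide address maps out of the vdsm log (GNU grep):

grep -o "'iface': 'ide', 'address': {[^}]*}" /var/log/vdsm/vdsm.log | sort -u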
Here's a chunk from the VDSM log:
Thread-4526::ERROR::2013-09-12 16:02:53,199::vm::2062::vm.Vm::
(_startUnderlyingVm) vmId=`cc86596b-0a69-4f5e-a4c2-e8d8ca18067e`::
The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/vm.py", line 2906, in _run
self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
line 76, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error unexpected address type for ide disk
Regards,
Jason
---
Jason Brooks
Red Hat Open Source and Standards
@jasonbrooks | @redhatopen
http://community.redhat.com
11 years, 3 months