Re: [Users] How to run Xen kernel inside ovirt node!
by Mike Burns
On 07/18/2013 07:58 AM, Vishvendra Singh Chauhan wrote:
> Hello Mike,
> Right now I'm using a Fedora host with vdsm, and I installed the
> vdsm-hook-nestedvt plugin, but I'm unable to create it.
I'm not familiar with the plugin beyond knowing that it exists. Redirecting
back to the users@ list.
Mike
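For reference, the nestedvt hook only helps if nested virtualization is
enabled in the KVM module on the L0 host first. A quick sanity check on a
Fedora host (a sketch, assuming an Intel CPU; use kvm_amd instead on AMD):

  # is nested virtualization enabled in the kvm_intel module?
  cat /sys/module/kvm_intel/parameters/nested    # Y (or 1) when enabled

  # if not, enable it and reload the module (with no VMs running)
  echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
  modprobe -r kvm_intel && modprobe kvm_intel

  # and confirm the hook is actually installed on the vdsm host
  rpm -q vdsm-hook-nestedvt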
>
> On 18 Jul 2013 17:07, "Mike Burns" <mburns(a)redhat.com> wrote:
>
> On 07/18/2013 03:04 AM, Vishvendra Singh Chauhan wrote:
>
>
>
>
> On Thu, Jul 18, 2013 at 11:16 AM, Gianluca Cecchi
> <gianluca.cecchi(a)gmail.com> wrote:
>
> On Thu, Jul 18, 2013 at 6:45 AM, Itamar Heim wrote:
> > On 07/18/2013 06:59 AM, Vishvendra Singh Chauhan wrote:
> >>
> >> ...kernel it starts normally. But when I tried it with a Xen kernel it
> >> is giving an error related to the CPU (kernel cpu).
> >>
> >>
> >> oVirt is KVM-based, not Xen - why do you expect it to work with a PV
> >> kernel (IIUC)?
> >>
> >>
> >>
> >> Actually I want to create a VM of RHEL5 with Xen, and I also want to
> >> create a VM of MS Server 2012 with Hyper-V. So I just want to know: is
> >> this possible or not in oVirt?
> >
> >
> > It's not clear to me what you are trying to achieve - can you please
> > provide some more details?
> >
>
> It seems to me he wants to test nested virtualization where:
> L0 = Qemu/KVM (managed by oVirt)
> L1 = Xen and/or Hyper-V
>
> Did I understand correctly?
>
>
> Yes, that is correct, Mr. Gianluca Cecchi. I want it.
>
>
> This is not possible currently with ovirt-node. You can achieve it
> with a standard Fedora host with vdsm and the vdsm-hook-nestedvt plugin.
>
> Mike
>
>
>
> Gianluca
>
>
>
>
> --
> Thanks and Regards,
> Vishvendra Singh Chauhan
> (RHC{SA,E,SS,VA}CC{NA,NP})
> +91-8750625343, +91-9555975004
> http://chauhan-rhce.blogspot.com
> God First Work Hard Success is Sure...
[Users] ovirt LACP/port aggregation fails to initialize on node boot
by Jason Keltz
I'm experimenting with LACP/port aggregation on my public network
interface in my oVirt test setup. My goal is to bind two 1G ports
together. Our network operations team configured two 1G switch ports
appropriately, and I set up the bond between eth0 and eth1 on the node
using the engine. I can't configure the IP of the interface statically
because then I don't get the option to set a gateway, which I need for
our public network. Using DHCP worked before enabling LACP (which
acquires the gateway from the DHCP record). After enabling LACP, when
the node boots, it doesn't get an address. The node gets hostname
"localhost". I have to login to the admin, hit F2, "ifdown PublicNet"
"ifup PublicNet", but then it works! There's obviously some minor delay
issue during node initialization, but there should be some way to tell
it to wait a bit longer?
Jason.
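A sketch of the usual workaround for this kind of race, assuming a standard
initscripts bond in 802.3ad mode (LINKDELAY is a stock initscripts option,
not something the engine writes for you):

  # /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
  DEVICE=bond0
  ONBOOT=yes
  BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"
  # wait up to 15s after link-up for LACP to negotiate before DHCP runs
  LINKDELAY=15

On the node, DHCP actually runs on the bridge (PublicNet) on top of the
bond, so LINKDELAY may belong in ifcfg-PublicNet instead.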
--
Jason Keltz
Manager of Development
Department of Electrical Engineering and Computer Science
York University, Toronto, Canada
Tel: 416-736-2100 x. 33570
Fax: 416-736-5872
[Users] 3.2.2 allinone install fails on CentOS 6.4
by Jim Kinney
I'm trying to install $STABLE (3.2.2) on CentOS 6.4. I have the el6 repo
from ovirt.
Before engine-setup --with-allinone=yes can complete, it errors out with
the following in the setup log:
2013-07-17 15:52:47::DEBUG::all_in_one_100::451::root:: Checking JBoss
status.
2013-07-17 15:52:47::INFO::all_in_one_100::454::root:: JBoss is up and
running.
2013-07-17 15:52:47::DEBUG::setup_sequences::59::root:: running initAPI
2013-07-17 15:52:47::DEBUG::all_in_one_100::240::root:: Initiating the API
object
2013-07-17 15:52:47::ERROR::all_in_one_100::251::root:: Traceback (most
recent call last):
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line
248, in initAPI
ca_file=basedefs.FILE_CA_CRT_SRC,
File "/usr/lib/python2.6/site-packages/ovirtsdk/api.py", line 119, in
__init__
url='/api'
File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py",
line 112, in request
persistent_auth=self._persistent_auth)
File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py",
line 134, in __doRequest
persistent_auth=persistent_auth
File "/usr/lib/python2.6/site-packages/ovirtsdk/web/connection.py", line
148, in doRequest
raise ConnectionError, str(e)
ConnectionError: [ERROR]::oVirt API connection failure, [Errno 111]
Connection refused
2013-07-17 15:52:47::DEBUG::setup_sequences::62::root:: Traceback (most
recent call last):
File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
function()
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line
252, in initAPI
raise Exception(ERROR_CREATE_API_OBJECT)
Exception: Error: could not create ovirtsdk API object
2013-07-17 15:52:47::DEBUG::engine-setup::1972::root:: *** The following
params were used as user input:
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root::
override-httpd-config: no
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: http-port: 8700
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: https-port: 8701
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: random-passwords: no
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: mac-range:
00:1A:4A:8C:8A:00-00:1A:4A:8C:8A:FF
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: host-fqdn:
storage01.mydomain.me
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: auth-pass: ********
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: org-name: mydomain.me
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: application-mode:
virt
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: default-dc-type:
POSIXFS
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: db-remote-install:
local
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: db-host: localhost
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: db-local-pass:
********
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: nfs-mp:
/var/lib/exports/iso
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: iso-domain-name:
ISO_DOMAIN
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: config-nfs: yes
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: override-firewall:
None
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: config-allinone: yes
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: storage-path:
/var/lib/images
2013-07-17 15:52:47::DEBUG::engine-setup::1977::root:: superuser-pass:
********
2013-07-17 15:52:47::ERROR::engine-setup::2392::root:: Traceback (most
recent call last):
File "/usr/bin/engine-setup", line 2386, in <module>
main(confFile)
File "/usr/bin/engine-setup", line 2169, in main
runSequences()
File "/usr/bin/engine-setup", line 2092, in runSequences
controller.runAllSequences()
File "/usr/share/ovirt-engine/scripts/setup_controller.py", line 54, in
runAllSequences
sequence.run()
File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 154, in
run
step.run()
File "/usr/share/ovirt-engine/scripts/setup_sequences.py", line 60, in run
function()
File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line
252, in initAPI
raise Exception(ERROR_CREATE_API_OBJECT)
Exception: Error: could not create ovirtsdk API object
After much digging, it seems the issue is with the certs, but it's not
clear to me why it fails. From the server.log:
2013-07-17 16:37:28,873 INFO [org.jboss.as.server.deployment.scanner] (MSC
service thread 1-3) JBAS015012: Started FileSystemDeploymentService for
directory /var/lib/ovirt-engine/deployments
2013-07-17 16:37:28,877 ERROR
[org.apache.tomcat.util.net.jsse.JSSESocketFactory] (MSC service thread
1-4) Failed to load keystore type PKCS12 with path
/etc/pki/ovirt-engine/keys/apache.p12 due to
/etc/pki/ovirt-engine/keys/apache.p12 (Permission denied):
java.io.FileNotFoundException: /etc/pki/ovirt-engine/keys/apache.p12
(Permission denied)
at java.io.FileInputStream.open(Native Method) [rt.jar:1.7.0_25]
at java.io.FileInputStream.<init>(FileInputStream.java:138)
[rt.jar:1.7.0_25]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.getStore(JSSESocketFactory.java:374)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeystore(JSSESocketFactory.java:299)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:515)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:452)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:168)
[jbossweb-7.0.13.Final.jar:]
at org.apache.tomcat.util.net.JIoEndpoint.init(JIoEndpoint.java:977)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.coyote.http11.Http11Protocol.init(Http11Protocol.java:190)
[jbossweb-7.0.13.Final.jar:]
at org.apache.catalina.connector.Connector.init(Connector.java:983)
[jbossweb-7.0.13.Final.jar:]
at
org.jboss.as.web.WebConnectorService.start(WebConnectorService.java:267)
[jboss-as-web-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
[jboss-msc-1.0.2.GA.jar:1.0.2.GA]
at
org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
[jboss-msc-1.0.2.GA.jar:1.0.2.GA]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[rt.jar:1.7.0_25]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[rt.jar:1.7.0_25]
at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_25]
2013-07-17 16:37:28,883 ERROR [org.apache.coyote.http11.Http11Protocol]
(MSC service thread 1-4) Error initializing endpoint:
java.io.FileNotFoundException: /etc/pki/ovirt-engine/keys/apache.p12
(Permission denied)
at java.io.FileInputStream.open(Native Method) [rt.jar:1.7.0_25]
at java.io.FileInputStream.<init>(FileInputStream.java:138)
[rt.jar:1.7.0_25]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.getStore(JSSESocketFactory.java:374)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeystore(JSSESocketFactory.java:299)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.getKeyManagers(JSSESocketFactory.java:515)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:452)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:168)
[jbossweb-7.0.13.Final.jar:]
at org.apache.tomcat.util.net.JIoEndpoint.init(JIoEndpoint.java:977)
[jbossweb-7.0.13.Final.jar:]
at
org.apache.coyote.http11.Http11Protocol.init(Http11Protocol.java:190)
[jbossweb-7.0.13.Final.jar:]
at org.apache.catalina.connector.Connector.init(Connector.java:983)
[jbossweb-7.0.13.Final.jar:]
at
org.jboss.as.web.WebConnectorService.start(WebConnectorService.java:267)
[jboss-as-web-7.1.1.Final.jar:7.1.1.Final]
at
org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
[jboss-msc-1.0.2.GA.jar:1.0.2.GA]
at
org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
[jboss-msc-1.0.2.GA.jar:1.0.2.GA]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[rt.jar:1.7.0_25]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[rt.jar:1.7.0_25]
at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_25]
2013-07-17 16:37:28,892 ERROR [org.jboss.msc.service.fail] (MSC service
thread 1-4) MSC00001: Failed to start service jboss.web.connector.https:
org.jboss.msc.service.StartException in service jboss.web.connector.https:
JBAS018007: Error starting web connector
at
org.jboss.as.web.WebConnectorService.start(WebConnectorService.java:271)
at
org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
[jboss-msc-1.0.2.GA.jar:1.0.2.GA]
at
org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
[jboss-msc-1.0.2.GA.jar:1.0.2.GA]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[rt.jar:1.7.0_25]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[rt.jar:1.7.0_25]
at java.lang.Thread.run(Thread.java:724) [rt.jar:1.7.0_25]
Caused by: LifecycleException: Protocol handler initialization failed:
java.io.FileNotFoundException: /etc/pki/ovirt-engine/keys/apache.p12
(Permission denied)
at org.apache.catalina.connector.Connector.init(Connector.java:985)
at
org.jboss.as.web.WebConnectorService.start(WebConnectorService.java:267)
... 5 more
2013-07-17 16:37:28,904 INFO [org.jboss.as.server.deployment.scanner]
(DeploymentScanner-threads - 1) JBAS015003: Found engine.ear in deployment
directory. To trigger deployment create a file called engine.ear.dodeploy
2013-07-17 16:37:28,957 INFO
[org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-11)
JBAS010400: Bound data source [java:/ENGINEDataSource]
2013-07-17 16:37:28,966 INFO [org.jboss.as.controller] (Controller Boot
Thread) JBAS014774: Service status report
JBAS014777: Services which failed to start: service
jboss.web.connector.https: org.jboss.msc.service.StartException in service
jboss.web.connector.https: JBAS018007: Error starting web connector
The permissions on /etc/pki/ovirt-engine/keys:
ls -la /etc/pki/ovirt-engine/keys/
total 24
drwxr-xr-x. 2 ovirt ovirt 4096 Jul 17 15:51 .
drwxr-xr-x. 6 ovirt ovirt 4096 Jul 17 15:51 ..
-rw-r-----. 1 apache apache 1828 Jul 17 15:51 apache.key.nopass
-rw-r-----. 1 apache apache 2685 Jul 17 15:51 apache.p12
-rw-------. 1 root root 1832 Jul 17 15:51 engine_id_rsa
-rw-r-----. 1 ovirt ovirt 2685 Jul 17 15:51 engine.p12
I've tried with setenforce 0 and no change.
I've downgraded to earlier 3.2.0 versions, earlier jboss-as, and the beta
allinone plugin for 3.2; no change. At one point I added some additional
debugging to the allinone script to make sure that reasonable variables were
being passed around (they are).
I'm stumped.
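Judging from the stack trace, the https connector runs inside JBoss as the
ovirt user, while apache.p12 above is apache:apache mode 640, which would
explain the "Permission denied". A quick check and a possible workaround (a
sketch; granting the ovirt user read access is an assumption, not the
documented fix):

  # which user is the engine's JBoss running as?
  ps -o user= -C java

  # can the ovirt user read the keystore at all?
  sudo -u ovirt head -c1 /etc/pki/ovirt-engine/keys/apache.p12 \
      >/dev/null && echo readable

  # workaround: grant ovirt read access without touching apache's ownership
  setfacl -m u:ovirt:r /etc/pki/ovirt-engine/keys/apache.p12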
--
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you gain
at one end you lose at the other. It's like feeding a dog on his own tail.
It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://electjimkinney.org
http://heretothereideas.blogspot.com/
[Users] How to run Xen kernel inside ovirt node!
by Vishvendra Singh Chauhan
Hello Members,

I am facing a problem in oVirt. When I start my RHEL5 VM with a normal
kernel it starts normally, but when I try it with a Xen kernel it gives an
error related to the CPU (kernel cpu).

So please give me a solution for it.

Thanks and Regards,
Vishvendra Singh Chauhan
(RHC{SA,E,SS,VA}CC{NA,NP})
+91-8750625343, +91-9555975004
http://chauhan-rhce.blogspot.com
God First Work Hard Success is Sure...
Re: [Users] p2v Import error
by Carlo Turco
There is no other VM with the same name. I even tried changing the name in the OVF file, but when I try to restore it, I get the same error again.
On 16/07/2013 10:37, Richard W.M. Jones wrote:
> Please reply on the list.
>
> Rich.
>
[Users] bug with ovirt nightlies and engine-setup-2 script
by Hetz Ben Hamo
Hi,
I installed the all-in-one plugin and ran the engine-setup-2 script. It
detects the plugin, and I'm using the iptables option. The problem is that it
writes the wrong lines in iptables:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5634-6166 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 49152-49216 -j ACCEPT
iptables does not accept the minus sign when specifying a port range, so
5634-6166 should be 5634:6166 (I wish it would accept the minus sign; it's
silly that it doesn't).
So the script fails to start iptables because of these wrong lines, and I'm
stuck with an oVirt that doesn't work.
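For reference, the corrected rules, plus a one-liner that should patch the
generated file (assuming the rules landed in /etc/sysconfig/iptables):

  -A INPUT -p tcp -m state --state NEW -m tcp --dport 5634:6166 -j ACCEPT
  -A INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49216 -j ACCEPT

  # rewrite 'NNNN-MMMM' port ranges to 'NNNN:MMMM', then reload
  sed -i 's/--dport \([0-9]\+\)-\([0-9]\+\)/--dport \1:\2/' /etc/sysconfig/iptables
  service iptables restart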
Log file - enclosed.
Hetz
[Users] Ovirt 3.3 nightly, Gluster 3.4 stable, cannot launch VM with gluster storage domain backed disk
by Steve Dainard
I'm getting an error when attempting to run a VM with a disk in a gluster
storage domain. Note that gluster is running on the same host as the oVirt
virt node, but is not managed by the oVirt manager.
oVirt Host RPMs:
vdsm-xmlrpc-4.11.0-143.git5fe89d4.fc18.noarch
vdsm-python-cpopen-4.11.0-142.git24ad94d.fc18.x86_64
vdsm-python-4.11.0-143.git5fe89d4.fc18.x86_64
vdsm-cli-4.11.0-143.git5fe89d4.fc18.noarch
vdsm-4.11.0-143.git5fe89d4.fc18.x86_64
glusterfs-3.4.0-1.fc18.x86_64
glusterfs-fuse-3.4.0-1.fc18.x86_64
glusterfs-server-3.4.0-1.fc18.x86_64
glusterfs-rdma-3.4.0-1.fc18.x86_64
oVirt Manager RPMs:
ovirt-engine-webadmin-portal-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-log-collector-3.3.0-0.2.master.20130715.git8affa81.fc18.noarch
ovirt-host-deploy-java-1.1.0-0.2.master.20130716.git26f4110.fc18.noarch
ovirt-engine-backend-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-iso-uploader-3.3.0-0.2.master.20130715.gitdf42ec9.fc18.noarch
ovirt-engine-userportal-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-engine-restapi-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-engine-tools-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-engine-dbscripts-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-host-deploy-1.1.0-0.2.master.20130716.git26f4110.fc18.noarch
ovirt-engine-sdk-3.3.0.3-1.20130621.git2bbf0b8.fc18.noarch
ovirt-engine-3.3.0-0.2.master.20130706220107.git598f593.fc18.noarch
ovirt-image-uploader-3.3.0-0.2.master.20130715.git7674462.fc18.noarch
ovirt-engine-setup-3.3.0-0.2.master.20130716053857.git3dd1ea3.fc18.noarch
Web UI displays:
VM VM1 is down. Exit message: internal error process exited while
connecting to monitor: qemu-system-x86_64: -drive
file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads:
could not open disk image
gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a:
No such file or directory .
VM VM1 was started by admin@internal (Host: ovirt001).
The disk VM1_Disk1 was successfully added to VM VM1.
I can see the image on the gluster machine, and it looks to have the
correct permissions:
[root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# pwd
/mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa
[root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]# ll
total 1028
-rw-rw----. 2 vdsm kvm 32212254720 Jul 17 11:11
ff2bca2d-4ed1-46c6-93c8-22a39bb1626a
-rw-rw----. 2 vdsm kvm 1048576 Jul 17 11:11
ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.lease
-rw-r--r--. 2 vdsm kvm 268 Jul 17 11:11
ff2bca2d-4ed1-46c6-93c8-22a39bb1626a.meta
[root@ovirt001 238cc6cf-070c-4483-b686-c0de7ddf0dfa]#
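One way to narrow this down is to bypass oVirt and check whether qemu itself
can open the image over the gluster:// protocol from the virt node. A
sketch, assuming the Fedora qemu build includes the gluster block driver;
the volume options below are only a guess at the usual culprit with gluster
3.4, which by default rejects the unprivileged ports qemu's libgfapi
connects from:

  qemu-img info gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a

  # if that fails too, try allowing insecure (non-privileged-port) clients:
  gluster volume set vol1 server.allow-insecure on
  # plus 'option rpc-auth-allow-insecure on' in /etc/glusterfs/glusterd.vol,
  # then restart glusterd and the volume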
engine.log:
2013-07-17 11:12:17,474 INFO [org.ovirt.engine.core.bll.AddDiskCommand]
(ajp--127.0.0.1-8702-6) Running command: AddDiskCommand internal: false.
Entities affected : ID: 8e
2c9057-deee-48a6-8314-a34530fc53cb Type: VM, ID:
a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf Type: Storage
2013-07-17 11:12:17,691 INFO
[org.ovirt.engine.core.bll.AddImageFromScratchCommand]
(ajp--127.0.0.1-8702-6) Running command: AddImageFromScratchCommand
internal: true. Enti
ties affected : ID: a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf Type: Storage
2013-07-17 11:12:17,746 INFO
[org.ovirt.engine.core.bll.AddImageFromScratchCommand]
(ajp--127.0.0.1-8702-6) Lock freed to object EngineLock [exclusiveLocks=
key: 8e2c9057-d
eee-48a6-8314-a34530fc53cb value: VM_DISK_BOOT
, sharedLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM
]
2013-07-17 11:12:17,752 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand]
(ajp--127.0.0.1-8702-6) START, CreateImageVDSCommand( storagePoolId =
5849b03
0-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false,
compatabilityVersion = 3.3, storageDomainId =
a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf, imageGroupId = 238cc6cf-070c-
4483-b686-c0de7ddf0dfa, imageSizeInBytes = 32212254720, volumeFormat = RAW,
newImageId = ff2bca2d-4ed1-46c6-93c8-22a39bb1626a, newImageDescription = ),
log id: 4a1dbc41
2013-07-17 11:12:17,754 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand]
(ajp--127.0.0.1-8702-6) -- CreateImageVDSCommand::ExecuteIrsBrokerCommand:
ca
lling 'createVolume' with two new parameters: description and UUID
2013-07-17 11:12:17,755 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand]
(ajp--127.0.0.1-8702-6) -- createVolume parameters:
sdUUID=a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf
spUUID=5849b030-626e-47cb-ad90-3ce782d831b3
imgGUID=238cc6cf-070c-4483-b686-c0de7ddf0dfa
size=32,212,254,720 bytes
volFormat=RAW
volType=Sparse
volUUID=ff2bca2d-4ed1-46c6-93c8-22a39bb1626a
descr=
srcImgGUID=00000000-0000-0000-0000-000000000000
srcVolUUID=00000000-0000-0000-0000-000000000000
2013-07-17 11:12:17,995 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand]
(ajp--127.0.0.1-8702-6) FINISH, CreateImageVDSCommand, return:
ff2bca2d-4ed1-
46c6-93c8-22a39bb1626a, log id: 4a1dbc41
2013-07-17 11:12:18,129 INFO [org.ovirt.engine.core.bll.CommandAsyncTask]
(ajp--127.0.0.1-8702-6) CommandAsyncTask::Adding CommandMultiAsyncTasks
object for command 1329503
b-d488-4fec-a5b0-10849679f025
2013-07-17 11:12:18,130 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-6)
CommandMultiAsyncTasks::AttachTask: Attaching task f222c17e-2402-492
8-a0db-4f9fcaeb08b6 to command 1329503b-d488-4fec-a5b0-10849679f025.
2013-07-17 11:12:18,156 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(ajp--127.0.0.1-8702-6) Adding task f222c17e-2402-4928-a0db-4f9fcaeb08b6
(Parent Command AddDisk,
Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling
hasn't started yet..
2013-07-17 11:12:18,279 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(ajp--127.0.0.1-8702-6) BaseAsyncTask::StartPollingTask: Starting to poll
task f222c17e-2402-4928-a0db
-4f9fcaeb08b6.
2013-07-17 11:12:19,122 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-92) Polling and updating Async Tasks: 1
tasks, 1 tasks to poll now
2013-07-17 11:12:19,147 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-92) SPMAsyncTask::PollTask: Polling task
f222c17e-2402-4928-a0db-4f9fcaeb08b6 (Parent Command AddDisk, Parameters
Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned
status running.
2013-07-17 11:12:19,150 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-92) Finished polling Tasks, will poll again
in 10 seconds.
2013-07-17 11:12:29,170 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-97) SPMAsyncTask::PollTask: Polling task
f222c17e-2402-4928-a0db-4f9fcaeb08b6 (Parent Command AddDisk, Parameters
Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned
status finished, result 'success'.
2013-07-17 11:12:29,209 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-97) BaseAsyncTask::OnTaskEndSuccess: Task
f222c17e-2402-4928-a0db-4f9fcaeb08b6 (Parent Command AddDisk, Parameters
Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
successfully.
2013-07-17 11:12:29,211 INFO [org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-97) CommandAsyncTask::EndActionIfNecessary:
All tasks of entity 1329503b-d488-4fec-a5b0-10849679f025 has ended ->
executing EndAction
2013-07-17 11:12:29,214 INFO [org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-97) CommandAsyncTask::EndAction: Ending
action for 1 tasks (command ID: 1329503b-d488-4fec-a5b0-10849679f025):
calling EndAction .
2013-07-17 11:12:29,219 INFO [org.ovirt.engine.core.bll.CommandAsyncTask]
(pool-6-thread-49) CommandAsyncTask::EndCommandAction [within thread]
context: Attempting to EndAction AddDisk, executionIndex: 0
2013-07-17 11:12:29,265 INFO [org.ovirt.engine.core.bll.AddDiskCommand]
(pool-6-thread-49) [1cf36ce5] Ending command successfully:
org.ovirt.engine.core.bll.AddDiskCommand
2013-07-17 11:12:29,315 INFO
[org.ovirt.engine.core.bll.AddImageFromScratchCommand] (pool-6-thread-49)
[78361e89] Ending command successfully:
org.ovirt.engine.core.bll.AddImageFromScratchCommand
2013-07-17 11:12:29,343 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(pool-6-thread-49) [78361e89] START, GetImageInfoVDSCommand( storagePoolId
= 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false,
compatabilityVersion = null, storageDomainId =
a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf, imageGroupId =
238cc6cf-070c-4483-b686-c0de7ddf0dfa, imageId =
ff2bca2d-4ed1-46c6-93c8-22a39bb1626a), log id: 61787855
2013-07-17 11:12:29,462 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(pool-6-thread-49) [78361e89] FINISH, GetImageInfoVDSCommand, return:
org.ovirt.engine.core.common.businessentities.DiskImage@380026d0, log id:
61787855
2013-07-17 11:12:29,547 INFO [org.ovirt.engine.core.bll.CommandAsyncTask]
(pool-6-thread-49) CommandAsyncTask::HandleEndActionResult [within thread]:
EndAction for action type AddDisk completed, handling the result.
2013-07-17 11:12:29,549 INFO [org.ovirt.engine.core.bll.CommandAsyncTask]
(pool-6-thread-49) CommandAsyncTask::HandleEndActionResult [within thread]:
EndAction for action type AddDisk succeeded, clearing tasks.
2013-07-17 11:12:29,562 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(pool-6-thread-49) SPMAsyncTask::ClearAsyncTask: Attempting to clear task
f222c17e-2402-4928-a0db-4f9fcaeb08b6
2013-07-17 11:12:29,566 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(pool-6-thread-49) START, SPMClearTaskVDSCommand( storagePoolId =
5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false,
compatabilityVersion = null, taskId =
f222c17e-2402-4928-a0db-4f9fcaeb08b6), log id: fb2eabf
2013-07-17 11:12:29,572 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(pool-6-thread-49) START, HSMClearTaskVDSCommand(HostName = ovirt001,
HostId = d07967ab-3764-47ff-8755-bc539a7feb3b,
taskId=f222c17e-2402-4928-a0db-4f9fcaeb08b6), log id: 2b51a9a6
2013-07-17 11:12:29,600 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(pool-6-thread-49) FINISH, HSMClearTaskVDSCommand, log id: 2b51a9a6
2013-07-17 11:12:29,601 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(pool-6-thread-49) FINISH, SPMClearTaskVDSCommand, log id: fb2eabf
2013-07-17 11:12:29,615 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(pool-6-thread-49) BaseAsyncTask::RemoveTaskFromDB: Removed task
f222c17e-2402-4928-a0db-4f9fcaeb08b6 from DataBase
2013-07-17 11:12:29,616 INFO [org.ovirt.engine.core.bll.CommandAsyncTask]
(pool-6-thread-49) CommandAsyncTask::HandleEndActionResult [within thread]:
Removing CommandMultiAsyncTasks object for entity
1329503b-d488-4fec-a5b0-10849679f025
2013-07-17 11:12:36,749 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(ajp--127.0.0.1-8702-5) [f287998] Lock Acquired to object EngineLock
[exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM
, sharedLocks= ]
2013-07-17 11:12:36,773 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--127.0.0.1-8702-5) [f287998] START, IsVmDuringInitiatingVDSCommand(
vmId = 8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 482ec087
2013-07-17 11:12:36,773 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(ajp--127.0.0.1-8702-5) [f287998] FINISH, IsVmDuringInitiatingVDSCommand,
return: false, log id: 482ec087
2013-07-17 11:12:36,932 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-49) [f287998] Running command: RunVmCommand internal: false.
Entities affected : ID: 8e2c9057-deee-48a6-8314-a34530fc53cb Type: VM
2013-07-17 11:12:37,038 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-49)
[f287998] START, CreateVmVDSCommand(HostName = ovirt001, HostId =
d07967ab-3764-47ff-8755-bc539a7feb3b,
vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 3ae8c11e
2013-07-17 11:12:37,057 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-6-thread-49) [f287998] START, CreateVDSCommand(HostName = ovirt001,
HostId = d07967ab-3764-47ff-8755-bc539a7feb3b,
vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, vm=VM [VM1]), log id: 367ff496
2013-07-17 11:12:37,168 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-6-thread-49) [f287998]
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
spiceSslCipherSuite=DEFAULT,memSize=1024,kvmEnable=true,smp=1,vmType=kvm,emulatedMachine=pc-1.0,keyboardLayout=en-us,pitReinjection=false,nice=0,display=vnc,smartcardEnable=false,tabletEnable=true,smpCoresPerSocket=1,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,timeOffset=0,transparentHugePages=true,vmId=8e2c9057-deee-48a6-8314-a34530fc53cb,devices=[Ljava.util.HashMap;@f1e0215
,acpiEnable=true,vmName=VM1,cpuType=SandyBridge,custom={}
2013-07-17 11:12:37,172 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(pool-6-thread-49) [f287998] FINISH, CreateVDSCommand, log id: 367ff496
2013-07-17 11:12:37,253 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (pool-6-thread-49)
[f287998] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id:
3ae8c11e
2013-07-17 11:12:37,255 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-49) [f287998] Lock freed to object EngineLock
[exclusiveLocks= key: 8e2c9057-deee-48a6-8314-a34530fc53cb value: VM
, sharedLocks= ]
2013-07-17 11:12:39,267 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-2) START, DestroyVDSCommand(HostName =
ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b,
vmId=8e2c9057-deee-48a6-8314-a34530fc53cb, force=false, secondsToWait=0,
gracefully=false), log id: 20fae62
2013-07-17 11:12:39,354 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-2) FINISH, DestroyVDSCommand, log id: 20fae62
2013-07-17 11:12:39,433 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-2) Running on vds during rerun failed vm:
null
2013-07-17 11:12:39,437 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-2) vm VM1 running in db and not running in
vds - add to rerun treatment. vds ovirt001
2013-07-17 11:12:39,441 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(DefaultQuartzScheduler_Worker-2) START, FullListVdsCommand(HostName =
ovirt001, HostId = d07967ab-3764-47ff-8755-bc539a7feb3b,
vds=Host[ovirt001], vmIds=[8e2c9057-deee-48a6-8314-a34530fc53cb]), log id:
119758a
2013-07-17 11:12:39,453 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(DefaultQuartzScheduler_Worker-2) FINISH, FullListVdsCommand, return:
[Ljava.util.HashMap;@2e73b796, log id: 119758a
2013-07-17 11:12:39,478 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-2) Rerun vm
8e2c9057-deee-48a6-8314-a34530fc53cb. Called from vds ovirt001
2013-07-17 11:12:39,574 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-49) Lock Acquired to object EngineLock [exclusiveLocks= key:
8e2c9057-deee-48a6-8314-a34530fc53cb value: VM
, sharedLocks= ]
2013-07-17 11:12:39,603 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-49) START, IsVmDuringInitiatingVDSCommand( vmId =
8e2c9057-deee-48a6-8314-a34530fc53cb), log id: 497e83ec
2013-07-17 11:12:39,606 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-49) FINISH, IsVmDuringInitiatingVDSCommand, return: false,
log id: 497e83ec
2013-07-17 11:12:39,661 INFO
[org.ovirt.engine.core.bll.scheduling.VdsSelector] (pool-6-thread-49) VDS
ovirt001 d07967ab-3764-47ff-8755-bc539a7feb3b have failed running this VM
in the current selection cycle
2013-07-17 11:12:39,663 WARN [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-49) CanDoAction of action RunVm failed.
Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_CLUSTER
2013-07-17 11:12:39,664 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-49) Lock freed to object EngineLock [exclusiveLocks= key:
8e2c9057-deee-48a6-8314-a34530fc53cb value: VM
, sharedLocks= ]
2013-07-17 11:13:49,097 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-42) Setting new tasks map. The map contains
now 0 tasks
2013-07-17 11:13:49,099 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-42) Cleared all tasks of pool
5849b030-626e-47cb-ad90-3ce782d831b3.
Steve Dainard
Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)
Blog <http://miovision.com/blog> | LinkedIn
<https://www.linkedin.com/company/miovision-technologies> | Twitter
<https://twitter.com/miovision> | Facebook <https://www.facebook.com/miovision>
------------------------------
Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
[Users] Node 3.0.0-5 fc18 with ovirt 3.2.1
by Jakub Bittner
Hi,
I am trying to use the latest stable node ISO
(ovirt-node-iso-3.0.0-5.0.1.fc18.iso) with oVirt 3.2.1, but it fails
when installing from the web GUI. I believe the problem is the missing
/usr/share/vdsm/addNetwork on the node. Is there any way to use node
version 3 with oVirt?
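A quick way to confirm the diagnosis from the node console (F2); this is
just a sanity check, not a fix:

  # does the script the installer calls exist on this node image?
  ls -l /usr/share/vdsm/addNetwork
  # which vdsm build shipped on the node, and what it actually provides
  rpm -q vdsm
  rpm -ql vdsm | grep -i network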
Here is the log:
2013-07-17 09:10:06 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.bridge
bridge._rhel_getInterfaceConfigParameters:479 parameters of em3:
['GATEWAY=192.168.3.1', 'IPADDR=192.168.3.207', 'NETMASK=255.255.255.0',
'ONBOOT=yes', 'PEERDNS=no', 'PEERNTP=yes']
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
systemd.exists:85 check if service firewalld exists
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:347 execute: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service'), executable='None', cwd='None', env=None
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:364 execute-result: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service'), rc=0
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:412 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service') stdout:
LoadState=loaded
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:417 execute-output: ('/bin/systemctl', 'show', '-p',
'LoadState', 'firewalld.service') stderr:
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
systemd.state:131 starting service firewalld
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:347 execute: ('/bin/systemctl', 'stop',
'firewalld.service'), executable='None', cwd='None', env=None
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:364 execute-result: ('/bin/systemctl', 'stop',
'firewalld.service'), rc=0
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:412 execute-output: ('/bin/systemctl', 'stop',
'firewalld.service') stdout:
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:417 execute-output: ('/bin/systemctl', 'stop',
'firewalld.service') stderr:
2013-07-17 09:10:06 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.bridge
plugin.executeRaw:347 execute: ['/usr/share/vdsm/addNetwork',
'ovirtmgmt', '', '', u'em3', 'GATEWAY=192.168.3.1',
'IPADDR=192.168.3.207', 'NETMASK=255.255.255.0', 'ONBOOT=yes',
'PEERDNS=no', 'PEERNTP=yes', 'blockingdhcp=true'], executable='None',
cwd='None', env=None
2013-07-17 09:10:06 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.bridge
plugin.executeRaw:370 execute-result: ['/usr/share/vdsm/addNetwork',
'ovirtmgmt', '', '', u'em3', 'GATEWAY=192.168.3.1',
'IPADDR=192.168.3.207', 'NETMASK=255.255.255.0', 'ONBOOT=yes',
'PEERDNS=no', 'PEERNTP=yes', 'blockingdhcp=true'], exception
Traceback (most recent call last):
File "/tmp/ovirt-rPK0bnkLRQ/pythonlib/otopi/plugin.py", line 357, in
executeRaw
env=env,
File "/usr/lib64/python2.7/subprocess.py", line 679, in __init__
File "/usr/lib64/python2.7/subprocess.py", line 1249, in _execute_child
OSError: [Errno 2] No such file or directory
2013-07-17 09:10:06 DEBUG otopi.context context._executeMethod:130
method exception
Traceback (most recent call last):
File "/tmp/ovirt-rPK0bnkLRQ/pythonlib/otopi/context.py", line 120, in
_executeMethod
method['method']()
File
"/tmp/ovirt-rPK0bnkLRQ/otopi-plugins/ovirt-host-deploy/vdsm/bridge.py",
line 770, in _misc
parameters=parameters,
File
"/tmp/ovirt-rPK0bnkLRQ/otopi-plugins/ovirt-host-deploy/vdsm/bridge.py",
line 544, in _createBridge
parameters
File "/tmp/ovirt-rPK0bnkLRQ/pythonlib/otopi/plugin.py", line 404, in
execute
**kwargs
File "/tmp/ovirt-rPK0bnkLRQ/pythonlib/otopi/plugin.py", line 357, in
executeRaw
env=env,
File "/usr/lib64/python2.7/subprocess.py", line 679, in __init__
File "/usr/lib64/python2.7/subprocess.py", line 1249, in _execute_child
OSError: [Errno 2] No such file or directory
2013-07-17 09:10:06 ERROR otopi.context context._executeMethod:139
Failed to execute stage 'Misc configuration': [Errno 2] No such file or
directory
2013-07-17 09:10:06 DEBUG otopi.transaction transaction.abort:131
aborting 'Yum Transaction'
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.packagers.yumpackager
miniyumlocal.verbose:49 Yum Performing rollback
2013-07-17 09:10:06 DEBUG otopi.transaction transaction.abort:131
aborting 'File transaction for '/etc/vdsm/vdsm.conf''
2013-07-17 09:10:06 DEBUG otopi.transaction transaction.abort:131
aborting 'File transaction for '/root/.ssh/authorized_keys''
2013-07-17 09:10:06 DEBUG otopi.transaction transaction.abort:131
aborting 'File transaction for '/etc/udev/rules.d/12-ovirt-iosched.rules''
2013-07-17 09:10:06 DEBUG otopi.transaction transaction.abort:131
aborting 'File transaction for '/etc/vdsm/vdsm.id''
2013-07-17 09:10:06 DEBUG otopi.context context.dumpEnvironment:418
ENVIRONMENT DUMP - BEGIN
2013-07-17 09:10:06 DEBUG otopi.context context.dumpEnvironment:428 ENV
BASE/error=bool:'True'
2013-07-17 09:10:06 DEBUG otopi.context context.dumpEnvironment:430
ENVIRONMENT DUMP - END
2013-07-17 09:10:06 INFO otopi.context context.runSequence:359 Stage:
Pre-termination
2013-07-17 09:10:06 DEBUG otopi.context context.runSequence:363 STAGE
pre-terminate
2013-07-17 09:10:06 DEBUG otopi.context context._executeMethod:116 Stage
pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***Q:STRING TERMINATION_COMMAND
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ###
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### Processing ended, use
'quit' to quit
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### COMMAND>
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:RECEIVE env-get -k BASE/error
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***D:VALUE BASE/error=bool:True
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***Q:STRING TERMINATION_COMMAND
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ###
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### Processing ended, use
'quit' to quit
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### COMMAND>
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:RECEIVE env-get -k BASE/aborted
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***D:VALUE BASE/aborted=bool:False
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***Q:STRING TERMINATION_COMMAND
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ###
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### Processing ended, use
'quit' to quit
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### COMMAND>
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:RECEIVE env-get -k
ODEPLOY/installIncomplete
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***D:VALUE
ODEPLOY/installIncomplete=bool:False
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***Q:STRING TERMINATION_COMMAND
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ###
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### Processing ended, use
'quit' to quit
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### COMMAND>
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:RECEIVE noop
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***Q:STRING TERMINATION_COMMAND
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ###
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### Processing ended, use
'quit' to quit
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### COMMAND>
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:RECEIVE env-get -k SYSTEM/reboot
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***D:VALUE SYSTEM/reboot=bool:False
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ***Q:STRING TERMINATION_COMMAND
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ###
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### Processing ended, use
'quit' to quit
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:SEND ### COMMAND>
2013-07-17 09:10:06 DEBUG otopi.plugins.otopi.dialog.machine
dialog.__logString:215 DIALOG:RECEIVE log