Hi everyone,
I just started to use oVirt and tried to get it working with my
OpenLDAP/Kerberos setup. After a lot of searching for the error message
I finally stumbled over an e-mail on this list saying that OpenLDAP is not yet
supported [1]. As advised, I opened an RFE bug [2] :).
Is there any developer documentation on how the access to the directory
is supposed to work? Maybe one of my humble co-workers with more Java
skills than me can contribute something.
Best wishes,
Matthias
[1] http://lists.ovirt.org/pipermail/users/2012-February/000678.html
[2] https://bugzilla.redhat.com/show_bug.cgi?id=836839
2012/6/29 Doron Fediuck <dfediuck(a)redhat.com>:
> Looks like your db is down. Please try
> service postgresql status
> If it's not running, start it.
>
Apparently starting the postgres service failed as well; perhaps there's
a package missing?
Is this a known issue on Fedora 16?
[root@ovirt01 ~]# service postgres status
Redirecting to /bin/systemctl status postgres.service
postgres.service
Loaded: error (Reason: No such file or directory)
Active: inactive (dead)
[root@ovirt01 ~]# service postgres stop
Redirecting to /bin/systemctl stop postgres.service
[root@ovirt01 ~]# service postgres start
Redirecting to /bin/systemctl start postgres.service
Failed to issue method call: Unit postgres.service failed to load: No
such file or directory. See system logs and 'systemctl status
postgres.service' for details.
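One thing worth noting before hunting for a missing package: the transcript queries `postgres.service`, but the Fedora postgresql-server package ships the unit as `postgresql.service` (which is also the name Doron used). A sketch of the checks, assuming a systemd-based Fedora host; the initdb step is from memory of Fedora of this era, so treat it as an assumption:

```shell
# The installed unit is "postgresql.service" -- "postgres.service" does
# not exist, which is exactly what "No such file or directory" means here.
systemctl status postgresql.service

# On a fresh install the data directory must be initialized first
# (on Fedora of this era: "service postgresql initdb"), then:
systemctl start postgresql.service
```

If `postgresql.service` itself is missing, only then does a missing package (postgresql-server) become the likely cause.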
--
Regards,
Umarzuki Mochlis
http://debmal.my
Hi everyone,
what is the right way to make engine-manage-domains more
verbose? I found a log4j.xml in utils.jar, but my configuration change
does not seem to take effect: I still only get INFO and ERROR
in /var/log/engine/engine-manage-domains/engine-manage-domains.log.
I'm running Fedora 17 with oVirt 3.1 [1].
Best wishes,
Matthias
ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-notification-service-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch
ovirt-engine-setup-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-config-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-backend-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-log-collector-3.1.0-0.fc17.noarch
ovirt-engine-restapi-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-genericapi-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-sdk-3.1.0.2-gita89f4e.fc17.noarch
ovirt-engine-userportal-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-dbscripts-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-tools-common-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
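As a general log4j note (not verified against the exact file shipped inside utils.jar): verbosity is controlled by the priority on the root logger or on a category, and an edit only takes effect if the log4j.xml being edited is actually the one picked up first on the classpath. A hedged sketch of what a DEBUG-level configuration looks like; the appender name, file path, and layout here are illustrative assumptions, not taken from the real file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <appender name="FILE" class="org.apache.log4j.FileAppender">
    <param name="File"
           value="/var/log/engine/engine-manage-domains/engine-manage-domains.log"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
    </layout>
  </appender>
  <root>
    <!-- DEBUG instead of INFO is what raises the verbosity -->
    <priority value="DEBUG"/>
    <appender-ref ref="FILE"/>
  </root>
</log4j:configuration>
```

If the copy packed inside utils.jar wins over an external file, the change may only be testable by unpacking the jar, editing, and repacking.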
01 Jul '12
Hi,
Is there a way to directly add an LDAP server to ovirt? Currently I
run engine-manage-domains with -domain=<domain-name>. This finds all
the ldap servers in the domain. Can I skip this and just add the one I
want? I have the fqdn of the ldap server.
Regards
Sharad Mishra
IBM
[Users] BSTRAP component='CreateConf' status='FAIL' message='Basic configuration failed to import default values'
by Karli Sjöberg 30 Jun '12
Hi,
I am running Fedora 17 and added the oVirt beta repository to get access
to the webadmin addition, since F17 only comes with the CLI by default.
# wget http://ovirt.org/releases/beta/ovirt-engine.repo -O /etc/yum.repos.d/ovirt-engine_beta.repo
# sed -i -- 's/fedora\/16/fedora\/17/' /etc/yum.repos.d/ovirt-engine_beta.repo
# yum install -y ovirt-engine
# rpm -qa | grep ovirt
ovirt-engine-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-genericapi-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch
ovirt-engine-sdk-3.1.0.2-gita89f4e.fc17.noarch
ovirt-engine-restapi-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-userportal-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-tools-common-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-log-collector-3.1.0-0.fc17.noarch
ovirt-engine-dbscripts-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-setup-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-notification-service-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch
ovirt-engine-config-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
ovirt-engine-backend-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
I need to get both the oVirt engine and a host up and running on this machine,
because I want it to be able to configure and execute power management on the
rest of the hosts in the cluster.
Source: http://lists.ovirt.org/pipermail/users/2012-February/000361.html
"Yes, the ovirt backend does not shut down or power up any hosts directly,
it can work only through vdsm. Therefore you need one running host per
datacenter to be able to manage the rest of the hosts."
I am then following this article:
http://blog.jebpages.com/archives/how-to-get-up-and-running-with-ovirt/
I get as far as adding the machine itself as a host in its own cluster, and
then this happens (this is a "Re-Install" from the WUI):
2012-06-28 12:25:00,000 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable hosts
2012-06-28 12:25:00,004 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Autorecovering 0 hosts
2012-06-28 12:25:00,005 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable hosts done
2012-06-28 12:25:00,006 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable storage domains
2012-06-28 12:25:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Autorecovering 0 storage domains
2012-06-28 12:25:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable storage domains done
2012-06-28 12:25:05,875 INFO [org.ovirt.engine.core.bll.UpdateVdsCommand] (ajp--0.0.0.0-8009-2) [13427c0] Running command: UpdateVdsCommand internal: false. Entities affected : ID: 105460c0-c0ea-11e1-b737-9b694eb255f6 Type: VDS
2012-06-28 12:25:05,889 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp--0.0.0.0-8009-2) [13427c0] START, SetVdsStatusVDSCommand(vdsId = 105460c0-c0ea-11e1-b737-9b694eb255f6, status=Installing, nonOperationalReason=NONE), log id: 5ef5ea33
2012-06-28 12:25:05,895 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp--0.0.0.0-8009-2) [13427c0] FINISH, SetVdsStatusVDSCommand, log id: 5ef5ea33
2012-06-28 12:25:05,910 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] Running command: InstallVdsCommand internal: true. Entities affected : ID: 105460c0-c0ea-11e1-b737-9b694eb255f6 Type: VDS
2012-06-28 12:25:05,914 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] Before Installation pool-3-thread-7
2012-06-28 12:25:05,915 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Starting Host installation)
2012-06-28 12:25:05,916 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Connecting to Host)
2012-06-28 12:25:05,959 INFO [org.ovirt.engine.core.utils.hostinstall.HostKeyVerifier] (NioProcessor-15) SSH key fingerprint f6:4d:81:ba:a0:f6:c4:09:85:18:10:5f:6f:47:09:58 for host njord.sto.slu.se (172.22.8.14) has been successfully verified.
2012-06-28 12:25:06,032 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='RHEV_INSTALL' status='OK' message='Connected to Host 172.22.8.14 with SSH key fingerprint: f6:4d:81:ba:a0:f6:c4:09:85:18:10:5f:6f:47:09:58'/>. FYI. (Stage: Connecting to Host)
2012-06-28 12:25:06,044 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Successfully connected to server ssh. (Stage: Connecting to Host)
2012-06-28 12:25:06,048 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Get the unique vds id)
2012-06-28 12:25:06,052 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Invoking /bin/echo -e `/bin/bash -c /usr/sbin/dmidecode|/bin/awk ' /UUID/{ print $2; } ' | /usr/bin/tr '
' '_' && cat /sys/class/net/*/address | /bin/grep -v '00:00:00:00' | /bin/sort -u | /usr/bin/head --lines=1` on 172.22.8.14
2012-06-28 12:25:06,145 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: 44454C4C-3200-1052-8050-B7C04F354431_00:15:17:36:60:4c
. FYI. (Stage: Get the unique vds id)
2012-06-28 12:25:06,149 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Assigning unique id 44454C4C-3200-1052-8050-B7C04F354431_00:15:17:36:60:4c to Host. (Stage: Get the unique vds id)
2012-06-28 12:25:06,162 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) RunSSHCommand returns true
2012-06-28 12:25:06,164 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Upload Installation script to Host)
2012-06-28 12:25:06,166 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /usr/share/ovirt-engine/scripts/vds_installer.py to /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py on 172.22.8.14
2012-06-28 12:25:06,170 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /usr/share/ovirt-engine/scripts/vds_installer.py to /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py on 172.22.8.14
2012-06-28 12:25:09,363 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. successfully done sftp operation ( Stage: Upload Installation script to Host)
2012-06-28 12:25:09,364 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) return true
2012-06-28 12:25:09,365 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /tmp/firewall.conf5272561799347733512.tmp to /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 on 172.22.8.14
2012-06-28 12:25:09,366 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /tmp/firewall.conf5272561799347733512.tmp to /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 on 172.22.8.14
2012-06-28 12:25:12,504 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. successfully done sftp operation ( Stage: Upload Installation script to Host)
2012-06-28 12:25:12,509 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) return true
2012-06-28 12:25:12,512 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Running first installation script on Host)
2012-06-28 12:25:12,516 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Sending SSH Command chmod +x /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py; /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py -c 'ssl=true;management_port=54321' -O 'slu' -t 2012-06-28T10:25:05 -f /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 -p 80 -b http://xcp-cms.data.slu.se:80/Components/vds/ http://xcp-cms.data.slu.se:80/Components/vds/ 172.22.8.14 ca67f0a5-115c-4943-a9ef-157654586da5 False. (Stage: Running first installation script on Host)
2012-06-28 12:25:12,530 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Invoking chmod +x /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py; /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py -c 'ssl=true;management_port=54321' -O 'slu' -t 2012-06-28T10:25:05 -f /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 -p 80 -b http://xcp-cms.data.slu.se:80/Components/vds/ http://xcp-cms.data.slu.se:80/Components/vds/ 172.22.8.14 ca67f0a5-115c-4943-a9ef-157654586da5 False on 172.22.8.14
2012-06-28 12:25:13,545 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='INSTALLER' status='OK' message='Test platform succeeded'/>
<BSTRAP component='INSTALLER LIB' status='OK' message='Install library already exists'/>
<BSTRAP component='INSTALLER' status='OK' message='vds_bootstrap.py download succeeded'/>
<BSTRAP component='RHN_REGISTRATION' status='OK' message='Host properly registered with RHN/Satellite.'/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:14,615 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDSM_MAJOR_VER' status='OK' message='Available VDSM matches requirements'/>
<BSTRAP component='VT_SVM' status='OK' processor='Intel' message='Server supports virtualization'/>
<BSTRAP component='OS' status='OK' type='FEDORA' message='Supported platform version'/>
<BSTRAP component='KERNEL' status='OK' version='0' message='Skipped kernel version check'/>
<BSTRAP component='CONFLICTING PACKAGES' status='OK' result='cman.x86_64' message='package cman.x86_64 is not installed '/>
<BSTRAP component='REQ PACKAGES' status='OK' result='SDL.x86_64' message='SDL-1.2.14-16.fc17.x86_64 '/>
<BSTRAP component='REQ PACKAGES' status='OK' result='bridge-utils.x86_64' message='bridge-utils-1.5-3.fc17.x86_64 '/>
<BSTRAP component='REQ PACKAGES' status='OK' result='mesa-libGLU.x86_64' message='mesa-libGLU-8.0.3-1.fc17.x86_64 '/>
<BSTRAP component='REQ PACKAGES' status='OK' result='openssl.x86_64' message='openssl-1.0.0j-1.fc17.x86_64 '/>
<BSTRAP component='REQ PACKAGES' status='OK' result='m2crypto.x86_64' message='m2crypto-0.21.1-8.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:15,705 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='REQ PACKAGES' status='OK' result='rsync.x86_64' message='rsync-3.0.9-2.fc17.x86_64 '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm' message='qemu-kvm-1.0-17.fc17.x86_64 '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm-tools' message='qemu-kvm-tools-1.0-17.fc17.x86_64 '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='vdsm' message='vdsm-4.10.0-2.fc17.x86_64 '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='vdsm-cli' message='vdsm-cli-4.10.0-2.fc17.noarch '/>
<BSTRAP component='VDS PACKAGES' status='WARN' result='libjpeg' message='package libjpeg is not installed '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='spice-server' message='spice-server-0.10.1-2.fc17.x86_64 '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='pixman' message='pixman-0.24.4-2.fc17.x86_64 '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='seabios' message='seabios-1.7.0-1.fc17.x86_64 '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='qemu-img' message='qemu-img-1.0-17.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:16,751 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='fence-agents' message='fence-agents-3.1.8-1.fc17.x86_64 '/>
<BSTRAP component='VDS PACKAGES' status='OK' result='libselinux-python' message='libselinux-python-2.1.10-3.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:27,770 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm' message='qemu-kvm-1.0-17.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:29,778 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm-tools' message='qemu-kvm-tools-1.0-17.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:32,785 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm' message='vdsm-4.10.0-2.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:35,794 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm-cli' message='vdsm-cli-4.10.0-2.fc17.noarch '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:38,805 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='spice-server' message='spice-server-0.10.1-2.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:40,819 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='pixman' message='pixman-0.24.4-2.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:43,826 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='seabios' message='seabios-1.7.0-1.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:46,833 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-img' message='qemu-img-1.0-17.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:49,856 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='fence-agents' message='fence-agents-3.1.8-1.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:52,898 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='libselinux-python' message='libselinux-python-2.1.10-3.fc17.x86_64 '/>
. FYI. (Stage: Running first installation script on Host)
2012-06-28 12:25:53,223 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='libjpeg' message='package libjpeg is not installed '/>
<BSTRAP component='CreateConf' status='FAIL' message='Basic configuration failed to import default values'/>
<BSTRAP component='RHEV_INSTALL' status='FAIL'/>
. Error occured. (Stage: Running first installation script on Host)
2012-06-28 12:25:53,236 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) RunSSHCommand returns true
2012-06-28 12:25:53,236 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] RunScript ended:true
2012-06-28 12:25:53,237 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Operation failure. (Stage: Running first installation script on Host)
2012-06-28 12:25:53,246 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] After Installation pool-3-thread-7
2012-06-28 12:25:53,248 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-7) [3a635c45] START, SetVdsStatusVDSCommand(vdsId = 105460c0-c0ea-11e1-b737-9b694eb255f6, status=InstallFailed, nonOperationalReason=NONE), log id: 3cf757b4
2012-06-28 12:25:53,264 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-7) [3a635c45] FINISH, SetVdsStatusVDSCommand, log id: 3cf757b4
The action in question "CreateConf" looks like:
/usr/share/vdsm-bootstrap/vds_bootstrap.py
    def _makeConfig(self):
        import datetime
        from config import config

        if not os.path.exists(VDSM_CONF):
            logging.debug("makeConfig: generating conf.")
            lines = []
            lines.append("# Auto-generated by vds_bootstrap at:" + str(datetime.datetime.now()) + "\n")
            lines.append("\n")
            lines.append("[vars]\n")  # Adding ts for the coming scirpts.
            lines.append("trust_store_path = " + config.get('vars', 'trust_store_path') + "\n")
            lines.append("ssl = " + config.get('vars', 'ssl') + "\n")
            lines.append("\n")
            lines.append("[addresses]\n")  # Adding mgt port for the coming scirpts.
            lines.append("management_port = " + config.get('addresses', 'management_port') + "\n")

            logging.debug("makeConfig: writing the following to " + VDSM_CONF)
            logging.debug(lines)
            fd, tmpName = tempfile.mkstemp()
            f = os.fdopen(fd, 'w')
            f.writelines(lines)
            f.close()
            os.chmod(tmpName, 0644)
            shutil.move(tmpName, VDSM_CONF)
        else:
            self.message = 'Basic configuration found, skipping this step'
            logging.debug(self.message)

    def createConf(self):
        """
        Generate initial configuration file for VDSM. Must run after package installation!
        """
        self.message = 'Basic configuration set'
        self.rc = True
        self.status = 'OK'
        try:
            self._makeConfig()
        except Exception, e:
            logging.error('', exc_info=True)
            self.message = 'Basic configuration failed'
            if isinstance(e, ImportError):
                self.message = self.message + ' to import default values'
            self.rc = False
            self.status = 'FAIL'
        self._xmlOutput('CreateConf', self.status, None, None, self.message)
        return self.rc
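Reading that code, the exact FAIL text in the log ("Basic configuration failed to import default values") can only come from the ImportError branch, which points at `from config import config` failing on the host, i.e. vdsm's config module not being importable there. A minimal sketch (my own, not vds_bootstrap itself) of that message-building logic:

```python
# Sketch of how createConf() classifies the exception from _makeConfig():
# only an ImportError gets the " to import default values" suffix seen
# in the BSTRAP 'CreateConf' FAIL message.

def classify_conf_error(exc):
    """Build the 'CreateConf' failure message the way createConf() does."""
    message = 'Basic configuration failed'
    if isinstance(exc, ImportError):
        # This is the branch that produced the reported FAIL message.
        message += ' to import default values'
    return message

try:
    # Simulate what _makeConfig() does first; on a host without vdsm's
    # config module this raises ImportError.
    from config import config  # noqa: F401
except ImportError as exc:
    print(classify_conf_error(exc))
```

So verifying whether `from config import config` works in the bootstrap environment on the failing host would likely confirm (or rule out) a broken or missing vdsm installation as the cause.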
What now? Can anyone tell me why it fails? Besides the obvious "it's beta", of course :)
Med Vänliga Hälsningar
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg(a)slu.se
(pool-3-thread-7) return true</div><div>2012-06-28 12:25:09,365 INFO  =
;[org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-threa=
d-7) Uploading file /tmp/firewall.conf5272561799347733512.tmp to /tmp/firew=
all.conf.ca67f0a5-115c-4943-a9ef-157654586da5 on 172.22.8.14</div><div>2012=
-06-28 12:25:09,366 INFO [org.ovirt.engine.core.utils.hostinstall.Min=
aInstallWrapper] (pool-3-thread-7) Uploading file /tmp/firewall.conf5272561=
799347733512.tmp to /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5=
on 172.22.8.14</div><div>2012-06-28 12:25:12,504 INFO [org.ovirt.eng=
ine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172=
.22.8.14. successfully done sftp operation ( Stage: Upload Installation scr=
ipt to Host)</div><div>2012-06-28 12:25:12,509 INFO [org.ovirt.engine=
.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) return true</=
div><div>2012-06-28 12:25:12,512 INFO [org.ovirt.engine.core.bll.VdsI=
nstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executi=
ng installation stage. (Stage: Running first installation script on Host)</=
div><div>2012-06-28 12:25:12,516 INFO [org.ovirt.engine.core.bll.VdsI=
nstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Sending=
SSH Command chmod +x /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586d=
a5.py; /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py -c 'ssl=
=3Dtrue;management_port=3D54321' -O 'slu' -t 2012-06-28T10:25:05 -f /tmp/fi=
rewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 -p 80 -b <a href=3D=
"http://xcp-cms.data.slu.se:80/Components/vds/">http://xcp-cms.data.slu.se:=
80/Components/vds/</a> <a href=3D"http://xcp-cms.data.slu.se:80/Components/=
vds/">http://xcp-cms.data.slu.se:80/Components/vds/</a> 172.22.8.14 ca67f0a=
5-115c-4943-a9ef-157654586da5 False. (Stage: Running first installation scr=
ipt on Host)</div><div>2012-06-28 12:25:12,530 INFO [org.ovirt.engine=
.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Invoking chmo=
d +x /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py; /tmp/vds_i=
nstaller_ca67f0a5-115c-4943-a9ef-157654586da5.py -c 'ssl=3Dtrue;management_=
port=3D54321' -O 'slu' -t 2012-06-28T10:25:05 -f /tmp/firewall.conf.ca67f0a=
5-115c-4943-a9ef-157654586da5 -p 80 -b <a href=3D"http://xcp-cms.dat=
a.slu.se:80/Components/vds/">http://xcp-cms.data.slu.se:80/Components/vds/<=
/a> <a href=3D"http://xcp-cms.data.slu.se:80/Components/vds/">http://xcp-cm=
s.data.slu.se:80/Components/vds/</a> 172.22.8.14 ca67f0a5-115c-4943-a9ef-15=
7654586da5 False on 172.22.8.14</div><div>2012-06-28 12:25:13,545 INFO &nbs=
p;[org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Ins=
tallation of 172.22.8.14. Received message: <BSTRAP component=3D'INSTALL=
ER' status=3D'OK' message=3D'Test platform succeeded'/></div><div><BS=
TRAP component=3D'INSTALLER LIB' status=3D'OK' message=3D'Install library a=
lready exists'/></div><div><BSTRAP component=3D'INSTALLER' status=3D'=
OK' message=3D'vds_bootstrap.py download succeeded'/></div><div><BSTR=
AP component=3D'RHN_REGISTRATION' status=3D'OK' message=3D'Host properly re=
gistered with RHN/Satellite.'/></div><div>. FYI. (Stage: Running first i=
nstallation script on Host)</div><div>2012-06-28 12:25:14,615 INFO [o=
rg.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Install=
ation of 172.22.8.14. Received message: <BSTRAP component=3D'VDSM_MAJOR_=
VER' status=3D'OK' message=3D'Available VDSM matches requirements'/></di=
v><div><BSTRAP component=3D'VT_SVM' status=3D'OK' processor=3D'Intel' me=
ssage=3D'Server supports virtualization'/></div><div><BSTRAP componen=
t=3D'OS' status=3D'OK' type=3D'FEDORA' message=3D'Supported platform versio=
n'/></div><div><BSTRAP component=3D'KERNEL' status=3D'OK' version=3D'=
0' message=3D'Skipped kernel version check'/></div><div><BSTRAP compo=
nent=3D'CONFLICTING PACKAGES' status=3D'OK' result=3D'cman.x86_64' message=
=3D'package cman.x86_64 is not installed '/></div><div><BSTRAP compon=
ent=3D'REQ PACKAGES' status=3D'OK' result=3D'SDL.x86_64' message=3D'SDL-1.2=
.14-16.fc17.x86_64 '/></div><div><BSTRAP component=3D'REQ PACKAGES' s=
tatus=3D'OK' result=3D'bridge-utils.x86_64' message=3D'bridge-utils-1.5-3.f=
c17.x86_64 '/></div><div><BSTRAP component=3D'REQ PACKAGES' status=3D=
'OK' result=3D'mesa-libGLU.x86_64' message=3D'mesa-libGLU-8.0.3-1.fc17.x86_=
64 '/></div><div><BSTRAP component=3D'REQ PACKAGES' status=3D'OK' res=
ult=3D'openssl.x86_64' message=3D'openssl-1.0.0j-1.fc17.x86_64 '/></div>=
<div><BSTRAP component=3D'REQ PACKAGES' status=3D'OK' result=3D'm2crypto=
.x86_64' message=3D'm2crypto-0.21.1-8.fc17.x86_64 '/></div><div>. FYI. (=
Stage: Running first installation script on Host)</div><div>2012-06-28 12:2=
5:15,705 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread=
-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP co=
mponent=3D'REQ PACKAGES' status=3D'OK' result=3D'rsync.x86_64' message=3D'r=
sync-3.0.9-2.fc17.x86_64 '/></div><div><BSTRAP component=3D'VDS PACKA=
GES' status=3D'OK' result=3D'qemu-kvm' message=3D'qemu-kvm-1.0-17.fc17.x86_=
64 '/></div><div><BSTRAP component=3D'VDS PACKAGES' status=3D'OK' res=
ult=3D'qemu-kvm-tools' message=3D'qemu-kvm-tools-1.0-17.fc17.x86_64 '/><=
/div><div><BSTRAP component=3D'VDS PACKAGES' status=3D'OK' result=3D'vds=
m' message=3D'vdsm-4.10.0-2.fc17.x86_64 '/></div><div><BSTRAP compone=
nt=3D'VDS PACKAGES' status=3D'OK' result=3D'vdsm-cli' message=3D'vdsm-cli-4=
.10.0-2.fc17.noarch '/></div><div><BSTRAP component=3D'VDS PACKAGES' =
status=3D'WARN' result=3D'libjpeg' message=3D'package libjpeg is not instal=
led '/></div><div><BSTRAP component=3D'VDS PACKAGES' status=3D'OK' re=
sult=3D'spice-server' message=3D'spice-server-0.10.1-2.fc17.x86_64 '/></=
div><div><BSTRAP component=3D'VDS PACKAGES' status=3D'OK' result=3D'pixm=
an' message=3D'pixman-0.24.4-2.fc17.x86_64 '/></div><div><BSTRAP comp=
onent=3D'VDS PACKAGES' status=3D'OK' result=3D'seabios' message=3D'seabios-=
1.7.0-1.fc17.x86_64 '/></div><div><BSTRAP component=3D'VDS PACKAGES' =
status=3D'OK' result=3D'qemu-img' message=3D'qemu-img-1.0-17.fc17.x86_64 '/=
></div><div>. FYI. (Stage: Running first installation script on Host)</d=
iv><div>2012-06-28 12:25:16,751 INFO [org.ovirt.engine.core.bll.VdsIn=
staller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received=
message: <BSTRAP component=3D'VDS PACKAGES' status=3D'OK' result=3D'fen=
ce-agents' message=3D'fence-agents-3.1.8-1.fc17.x86_64 '/></div><div><=
;BSTRAP component=3D'VDS PACKAGES' status=3D'OK' result=3D'libselinux-pytho=
n' message=3D'libselinux-python-2.1.10-3.fc17.x86_64 '/></div><div>. FYI=
. (Stage: Running first installation script on Host)</div><div>2012-06-28 1=
2:25:27,770 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thr=
ead-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP=
component=3D'VDS PACKAGES' status=3D'OK' result=3D'qemu-kvm' message=3D'qe=
mu-kvm-1.0-17.fc17.x86_64 '/></div><div>. FYI. (Stage: Running first ins=
tallation script on Host)</div><div>2012-06-28 12:25:29,778 INFO [org=
.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installat=
ion of 172.22.8.14. Received message: <BSTRAP component=3D'VDS PACKAGES'=
status=3D'OK' result=3D'qemu-kvm-tools' message=3D'qemu-kvm-tools-1.0-17.f=
c17.x86_64 '/></div><div>. FYI. (Stage: Running first installation scrip=
t on Host)</div><div>2012-06-28 12:25:32,785 INFO [org.ovirt.engine.c=
ore.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8=
.14. Received message: <BSTRAP component=3D'VDS PACKAGES' status=3D'OK' =
result=3D'vdsm' message=3D'vdsm-4.10.0-2.fc17.x86_64 '/></div><div>. FYI=
. (Stage: Running first installation script on Host)</div><div>2012-06-28 1=
2:25:35,794 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thr=
ead-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP=
component=3D'VDS PACKAGES' status=3D'OK' result=3D'vdsm-cli' message=3D'vd=
sm-cli-4.10.0-2.fc17.noarch '/></div><div>. FYI. (Stage: Running first i=
nstallation script on Host)</div><div>2012-06-28 12:25:38,805 INFO [o=
rg.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Install=
ation of 172.22.8.14. Received message: <BSTRAP component=3D'VDS PACKAGE=
S' status=3D'OK' result=3D'spice-server' message=3D'spice-server-0.10.1-2.f=
c17.x86_64 '/></div><div>. FYI. (Stage: Running first installation scrip=
t on Host)</div><div>2012-06-28 12:25:40,819 INFO [org.ovirt.engine.c=
ore.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8=
.14. Received message: <BSTRAP component=3D'VDS PACKAGES' status=3D'OK' =
result=3D'pixman' message=3D'pixman-0.24.4-2.fc17.x86_64 '/></div><div>.=
FYI. (Stage: Running first installation script on Host)</div><div>2012-06-=
28 12:25:43,826 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3=
-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BS=
TRAP component=3D'VDS PACKAGES' status=3D'OK' result=3D'seabios' message=3D=
'seabios-1.7.0-1.fc17.x86_64 '/></div><div>. FYI. (Stage: Running first =
installation script on Host)</div><div>2012-06-28 12:25:46,833 INFO [=
org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Instal=
lation of 172.22.8.14. Received message: <BSTRAP component=3D'VDS PACKAG=
ES' status=3D'OK' result=3D'qemu-img' message=3D'qemu-img-1.0-17.fc17.x86_6=
4 '/></div><div>. FYI. (Stage: Running first installation script on Host=
)</div><div>2012-06-28 12:25:49,856 INFO [org.ovirt.engine.core.bll.V=
dsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Rece=
ived message: <BSTRAP component=3D'VDS PACKAGES' status=3D'OK' result=3D=
'fence-agents' message=3D'fence-agents-3.1.8-1.fc17.x86_64 '/></div><div=
>. FYI. (Stage: Running first installation script on Host)</div><div>2012-0=
6-28 12:25:52,898 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool=
-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <=
BSTRAP component=3D'VDS PACKAGES' status=3D'OK' result=3D'libselinux-python=
' message=3D'libselinux-python-2.1.10-3.fc17.x86_64 '/></div><div>. FYI.=
(Stage: Running first installation script on Host)</div><div>2012-06-28 12=
:25:53,223 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7)=
[3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP compo=
nent=3D'VDS PACKAGES' status=3D'OK' result=3D'libjpeg' message=3D'package l=
ibjpeg is not installed '/></div><div><BSTRAP component=3D'CreateConf=
' status=3D'FAIL' message=3D'Basic configuration failed to import default v=
alues'/></div><div><BSTRAP component=3D'RHEV_INSTALL' status=3D'FAIL'=
/></div><div>. Error occured. (Stage: Running first installation script =
on Host)</div><div>2012-06-28 12:25:53,236 INFO [org.ovirt.engine.cor=
e.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) RunSSHCommand ret=
urns true</div><div>2012-06-28 12:25:53,236 INFO [org.ovirt.engine.co=
re.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] RunScript ended:tru=
e</div><div>2012-06-28 12:25:53,237 ERROR [org.ovirt.engine.core.bll.VdsIns=
taller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Operation=
failure. (Stage: Running first installation script on Host)</div><div>2012=
-06-28 12:25:53,246 INFO [org.ovirt.engine.core.bll.InstallVdsCommand=
] (pool-3-thread-7) [3a635c45] After Installation pool-3-thread-7</div><div=
>2012-06-28 12:25:53,248 INFO [org.ovirt.engine.core.vdsbroker.SetVds=
StatusVDSCommand] (pool-3-thread-7) [3a635c45] START, SetVdsStatusVDSComman=
d(vdsId =3D 105460c0-c0ea-11e1-b737-9b694eb255f6, status=3DInstallFailed, n=
onOperationalReason=3DNONE), log id: 3cf757b4</div><div>2012-06-28 12:25:53=
,264 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (p=
ool-3-thread-7) [3a635c45] FINISH, SetVdsStatusVDSCommand, log id: 3cf757b4=
</div></div><div><br></div><div><br></div><div>The action in question "Crea=
teConf" looks like:</div><div><br></div><div>/usr/share/vdsm-bootstrap/vds_=
bootstrap.py</div><div><br></div><div><div> def _makeConfig(se=
lf):</div><div> import datetime</div><div> =
from config import config</div><div><br></div><div>&n=
bsp; if not os.path.exists(VDSM_CONF):</div><div> =
; logging.debug("makeConfig: generating =
conf.")</div><div> lines =3D []</d=
iv><div> lines.append ("# Auto-gen=
erated by vds_bootstrap at:" + str(datetime.datetime.now()) + "\n")</div><d=
iv> lines.append ("\n")</div><div>=
<br></div><div> lines.append ("[va=
rs]\n") #Adding ts for the coming scirpts.</div><div> &=
nbsp; lines.append ("trust_store_path =3D " + config.get('var=
s', 'trust_store_path') + "\n")</div><div>  =
; lines.append ("ssl =3D " + config.get('vars', 'ssl') + "\n")</div>=
<div> lines.append ("\n")</div><di=
v><br></div><div> lines.append ("[=
addresses]\n") #Adding mgt port for the coming scirpts.</div><div> &n=
bsp; lines.append ("management_port =3D " + con=
fig.get('addresses', 'management_port') + "\n")</div><div><br></div><div>&n=
bsp; logging.debug("makeConfig: writing =
the following to " + VDSM_CONF)</div><div>  =
; logging.debug(lines)</div><div> =
fd, tmpName =3D tempfile.mkstemp()</div><div> &n=
bsp; f =3D os.fdopen(fd, 'w')</div><div> =
f.writelines(lines)</div><div> &nb=
sp; f.close()</div><div> &n=
bsp; os.chmod(tmpName, 0644)</div><div> &=
nbsp; shutil.move(tmpName, VDSM_CONF)</div><div> =
else:</div><div> self.message =3D=
'Basic configuration found, skipping this step'</div><div> &n=
bsp; logging.debug(self.message)</div><div><br></div><=
div> def createConf(self):</div><div> &nbs=
p; """</div><div> Generate initial=
configuration file for VDSM. Must run after package installation!</div><di=
v> """</div><div> sel=
f.message =3D 'Basic configuration set'</div><div> &nbs=
p; self.rc =3D True</div><div> self.status =3D '=
OK'</div><div><br></div><div> try:</div><div>&nb=
sp; self._makeConfig()</div><div> =
except Exception, e:</div><div> &n=
bsp; logging.error('', exc_info=3DTrue)</div><div> &nbs=
p; self.message =3D 'Basic configuration failed=
'</div><div> if isinstance(e, Impo=
rtError):</div><div> =
self.message =3D self.message + ' to import default values'</div><div>&nbs=
p; self.rc =3D False</div><div> &n=
bsp; self.status =3D 'FAIL'</div><div><br></div=
><div> self._xmlOutput('CreateConf', self.status=
, None, None, self.message)</div><div> return se=
lf.rc</div><div><br></div><div><br></div><div>What now? Can anyone tell me =
why it fails? Besides the obvious "it=B4s beta" of course:)</div><div>
<div><br class=3D"Apple-interchange-newline"><br></div><div>Med V=E4nliga H=
=E4lsningar<br>------------------------------------------------------------=
-------------------<br>Karli Sj=F6berg<br>Swedish University of Agricultura=
l Sciences<br>Box 7079 (Visiting Address Kron=E5sv=E4gen 8)<br>S-750 07 Upp=
sala, Sweden<br>Phone: +46-(0)18-67 15 66</div><div><a href=3D"mailto=
:karli.sjoberg@adm.slu.se">karli.sjoberg(a)slu.se</a></div>
</div>
<br></div></body></html>=
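The FAIL above is produced by createConf(): _makeConfig() raises on `from config import config`, and because the exception is an ImportError, createConf() appends ' to import default values' to the failure message. A minimal sketch of that error-classification pattern (hypothetical module names, not the vdsm code itself):

```python
import importlib
import logging

def check_bootstrap_import(module_name):
    """Mimic createConf()'s handling: an ImportError while loading the
    config module yields the 'failed to import default values' message."""
    message = 'Basic configuration set'
    status = 'OK'
    try:
        importlib.import_module(module_name)
    except Exception as e:
        logging.error('', exc_info=True)
        message = 'Basic configuration failed'
        if isinstance(e, ImportError):
            message += ' to import default values'
        status = 'FAIL'
    return status, message

# A module that is certainly absent reproduces the reported message:
print(check_bootstrap_import('no_such_config_module'))
# -> ('FAIL', 'Basic configuration failed to import default values')
```

So on the failing host, running `python -c 'from config import config'` from the directory where vds_bootstrap.py executes would likely show the underlying ImportError directly (assumption: the vdsm bootstrap library layout on that F17 host).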
Re: [Users] I am tiring to manually create the RPM but getting the following errors.
by Robert Middleswarth 30 Jun '12
Sent from my Samsung Epic™ 4G Touch

-------- Original message --------
Subject: Re: [Users] I am tiring to manually create the RPM but getting the following errors.
From: Itamar Heim <iheim@redhat.com>
To: Robert Middleswarth <robert@middleswarth.net>
CC: Juan Hernandez <jhernand@redhat.com>, users <users@ovirt.org>

On 06/29/2012 09:50 PM, Robert Middleswarth wrote:
> On 06/29/2012 05:44 AM, Juan Hernandez wrote:
>> On 06/29/2012 04:52 AM, Robert Middleswarth wrote:
>>> Note I am attempting to build on CentOS after applying
>>> http://www1.dreyou.org/ovirt/ engine patch.
>>>
>>> I have tiried with both master and engine_3.1 branch but I am pretty
>>> certain that it is a missing depend in my build environment? Any hints?
>>>
>>> # Hibernate validator module:
>>> ln -s /usr/share/java/hibernate-validator.jar
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/usr/share/ovirt-engine/modules/org/hibernate/validator/main/.
>>>
>>> ln -s /usr/share/java/jtype.jar
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/usr/share/ovirt-engine/modules/org/hibernate/validator/main/.
>>>
>>> *** Deploying service
>>> # Install the files:
>>> install -m 644 packaging/fedora/engine-service.xml
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/etc/ovirt-engine
>>>
>>> install -m 644 packaging/fedora/engine-service-logging.properties
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/etc/ovirt-engine
>>>
>>> install -m 644 packaging/fedora/engine-service-users.properties
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/etc/ovirt-engine
>>>
>>> install -m 644 packaging/fedora/engine-service.sysconfig
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/etc/sysconfig/ovirt-engine
>>>
>>> install -m 644 packaging/fedora/engine-service.limits
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/etc/security/limits.d/10-ovirt-engine.conf
>>>
>>> install -m 755 packaging/fedora/engine-service.py
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/usr/share/ovirt-engine/scripts
>>>
>>> install -m 755 packaging/fedora/engine-service.systemv
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/etc/rc.d/init.d/ovirt-engine
>>>
>>> make[1]: Leaving directory
>>> `/root/centos_engine_3.1/rpmbuild/BUILD/ovirt-engine-3.1.0'
>>> + install -d -m 755
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/usr/share/java/ovirt-engine
>>>
>>> + install -d -m 755
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/usr/share/maven2/poms
>>>
>>> + install -d -m 755
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/usr/share/javadoc/ovirt-engine
>>>
>>> + read module_path artifact_id
>>> + pom_file=./pom.xml
>>> + jar_file=./target/parent-3.1.0.jar
>>> + install -p -m 644 ./pom.xml
>>> /root/centos_engine_3.1/rpmbuild/BUILDROOT/ovirt-engine-3.1.0-3.el6.x86_64/usr/share/maven2/poms/JPP.ovirt-engine-parent.pom
>>>
>>> + '[' -f ./target/parent-3.1.0.jar ']'
>>> + %add_maven_depmap JPP.ovirt-engine-parent.pom
>>> /var/tmp/rpm-tmp.hkOAWN: line 58: fg: no job control
>>> error: Bad exit status from /var/tmp/rpm-tmp.hkOAWN (%install)
>>>
>>>
>>> RPM build errors:
>>> Bad exit status from /var/tmp/rpm-tmp.hkOAWN (%install)
>>> make: *** [rpm] Error 1
>> Can you share the temp file where you get that error? I is
>> /var/tmp/rpm-tmp.hkOAWN in your latests message, but will be different
>> if you repeat the build.
>>
>>
> I had already shutdown the VM so the tmp folder was cleared. I am
> running the build again and going to grab a full log and that file and
> fpaste them. But the build process is taking me 12 hours to do so it
> will be a day or two before I can reply.

12 hours, why?!

That is how long make rpm is taking? I suspect the fact the system only has 3g of ram might be part of the long compile times

Thanks
Robert
[Users] Missing Tivoli Directory Server (ITDS) support in 3.1
by snmishra@linux.vnet.ibm.com 29 Jun '12
Hi,
I am trying to test ITDS support in ovirt 3.1. We have an F17
machine that was running ovirt 3.0. We upgraded it to 3.1 and tried to
add a Tivoli DS to it by running "engine-manage-domains -action=add".
But I was surprised to see an error that itds is not supported and that the
only supported providers are ipa, rhds and AD. I am thinking that it is
possible that something did not go right with the upgrade from 3.0 to
3.1. I am preparing a fresh F17 with oVirt 3.1 to test again.
Thanks
Sharad Mishra
IBM
Re: [Users] [Engine-devel] Developer's All In One -Can not create an vm on iscsi disk
by Sheldon 29 Jun '12
I have set up all-in-one, but I use an iSCSI data center, not an NFS data center.
ovirt-engine creates a direct LUN disk, but cannot create a VM.
The status is always "wait for launch". It takes a long time, but the
status is still "wait for launch".
I set up an iscsi server, as follow:
# dd if=/dev/zero of=$SCPATH/fs.iscsi.disk2 bs=1M count=15360
# dd if=/dev/zero of=$SCPATH/fs.iscsi.disk3 bs=1M count=15360
# sudo tgtadm --lld iscsi --op new --mode target --tid 1 -T
iqn.2012-02.com.example.data1
# sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2
-b $SCPATH/fs.iscsi.disk2
# sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 3
-b $SCPATH/fs.iscsi.disk3
# sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
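Before pointing oVirt at such a target, it may help to confirm the node can actually see it with open-iscsi's iscsiadm (a sketch; TARGET_IP is a placeholder for the host running tgtd, and the IQN matches the one defined above):

```shell
# Discover targets advertised by the portal (placeholder address):
iscsiadm -m discovery -t sendtargets -p TARGET_IP:3260
# Log in to the target and check that the LUNs appear as block devices:
iscsiadm -m node -T iqn.2012-02.com.example.data1 -p TARGET_IP:3260 --login
lsblk
```

If the two 15 GiB disks do not show up after login, the problem is on the target/network side rather than in the engine.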
On 06/18/2012 10:01 PM, Allon Mureinik wrote:
> Cross-posting to engine-devel
>
>> ----- Forwarded Message -----
>> From: "Yeela Kaplan" <ykaplan(a)redhat.com>
>> To: vdsm-devel(a)lists.fedorahosted.org,
>> engine-devel(a)lists.fedorahosted.org
>> Cc: "Ayal Baron" <abaron(a)redhat.com>
>> Sent: Monday, June 18, 2012 2:58:55 PM
>> Subject: Developer's All In One
>>
>> Enclosed is the link to a wiki containing a detailed explanation for
>> installing a developer's All-In-One environment:
>>
>> http://www.ovirt.org/wiki/Developers_All_In_One
>>
>> Regards,
>> Yeela
>>
> _______________________________________________
> Engine-devel mailing list
> Engine-devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/engine-devel
>
Following the article at http://www.ovirt.org/wiki/OVirt_-_disable_SSL_in_VDSM
I managed to get vdsmd started on Fedora 16, but failed when running
[root@ovirt01 ~]# psql engine -U postgres -c "UPDATE vdc_options set
option_value = 'false' where option_name = 'SSLEnabled'"
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
[root@ovirt01 ~]# psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Is this step necessary to start oVirt on Fedora 16?
I assumed postgres was installed along with oVirt:
[root@ovirt01 ~]# rpm -q postgresql
postgresql-9.1.4-1.fc16.x86_64
and it is not in the startup list either:
[root@ovirt01 ~]# chkconfig --list
Note: This output shows SysV services only and does not include native
systemd services. SysV configuration data might be overridden by native
systemd configuration.
ceph 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ebtables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
hsqldb 0:off 1:off 2:off 3:on 4:on 5:on 6:off
iscsi 0:off 1:off 2:off 3:on 4:on 5:on 6:off
iscsid 0:off 1:off 2:off 3:on 4:on 5:on 6:off
jboss-as 0:off 1:off 2:on 3:on 4:on 5:on 6:off
libvirt-guests 0:off 1:off 2:off 3:off 4:off 5:off 6:off
libvirtd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
livesys 0:off 1:off 2:off 3:on 4:on 5:on 6:off
livesys-late 0:off 1:off 2:off 3:on 4:on 5:on 6:off
netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs 0:off 1:off 2:off 3:on 4:on 5:on 6:off
network 0:off 1:off 2:off 3:off 4:off 5:off 6:off
sandbox 0:off 1:off 2:off 3:off 4:off 5:on 6:off
tcsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Thanks for any tips/hints.
The last Fedora I used was version 12, so systemd is quite new to me.
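For what it's worth, chkconfig --list only covers SysV scripts, so a systemd-managed PostgreSQL will never show up there. A sketch of the assumed fix on Fedora 16 (unit name taken from Fedora's postgresql package):

```shell
# systemd services are invisible to chkconfig's SysV-only listing:
systemctl list-unit-files | grep postgresql

# Start the server now and enable it at boot:
sudo systemctl start postgresql.service
sudo systemctl enable postgresql.service

# Then retry the update:
psql engine -U postgres -c "UPDATE vdc_options SET option_value = 'false' WHERE option_name = 'SSLEnabled';"
```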
--
Regards,
Umarzuki Mochlis
http://debmal.my
Good evening to all,
I have added two Fedora 17 nodes with vdsm 4.10 installed to an oVirt
3.1 engine. I was having problems with SSL, so I disabled it. When I
added the nodes there was no installation attempt, as was the case with
oVirt 3.0; the nodes simply get activated, provided that the ovirtmgmt
bridge is present.
I have been told that there is a configuration in the database that makes
this happen. I just recreated the database via the script
/dbscripts/create_db_devel.sh and ran engine-setup, after removing all
oVirt 3.0 and JBoss packages and installing the oVirt 3.1 basic packages.
My question is: What would be the 'standard' procedure to get oVirt 3.1
running?
Regards,
Jose Garcia
Re: [Users] BSTRAP component='CreateConf' status='FAIL' message='Basic configuration failed to import default values'
by Michal Skrivanek 28 Jun '12
On Jun 28, 2012, at 15:55 , Karli Sjöberg wrote:
>
> 28 jun 2012 kl. 15.45 skrev Michal Skrivanek:
>
>> Hi,
>> well, I'd check whatever the error is saying…i.e. do you have libjpeg installed?
>>
>> 2012-06-28 12:25:53,223 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='libjpeg' message='package libjpeg is not installed '/>
>
> Yes, and also earlier in the log is:
>> <BSTRAP component='VDS PACKAGES' status='WARN' result='libjpeg' message='package libjpeg is not installed '/>
>
> But it is installed:
> # rpm -qa | grep libjpeg
> libjpeg-turbo-1.2.0-1.fc17.x86_64
libjpeg is not libjpeg-turbo.
It seems libjpeg-turbo is the default libjpeg implementation in Fedora 17 now…
The bootstrap code would need a fix, I suppose...
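The mismatch can be made visible with two rpm queries (a sketch; the second query mirrors the exact-name check the bootstrap apparently performs):

```shell
# libjpeg-turbo declares "Provides: libjpeg", so yum sees nothing to do:
rpm -q --whatprovides libjpeg

# ...but an exact-name query, like the bootstrap's package check, fails:
rpm -q libjpeg
```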
> # yum install -y libjpeg
> Loaded plugins: langpacks, presto, refresh-packagekit, versionlock
> Package libjpeg-turbo-1.2.0-1.fc17.x86_64 already installed and latest version
> Nothing to do
>
>
>>
>> Thanks,
>> michal
>>
>> On Jun 28, 2012, at 12:32 , Karli Sjöberg wrote:
>>
>>> Hi,
>>>
>>> I am running Fedora 17 and added the ovirt beta repository to have access to webadmin addition, since F17 only comes with CLI by default.
>>>
>>> # wget http://ovirt.org/releases/beta/ovirt-engine.repo -O /etc/yum.repos.d/ovirt- engine_beta.repo
>>> # sed -i -- 's/fedora\/16/fedora\/17/' /etc/yum.repos.d/ovirt-engine_beta.repo
>>> # yum install -y ovirt-engine
>>> # rpm -qa | grep ovirt
>>> ovirt-engine-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-genericapi-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-webadmin-portal-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch
>>> ovirt-engine-sdk-3.1.0.2-gita89f4e.fc17.noarch
>>> ovirt-engine-restapi-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-userportal-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-tools-common-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-log-collector-3.1.0-0.fc17.noarch
>>> ovirt-engine-dbscripts-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-setup-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-notification-service-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch
>>> ovirt-engine-config-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-backend-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>>
>>> I need to get oVirt-engine- and host up and running on the machine because I want it to be able to configure and execute power management on the rest of the hosts in the cluster.
>>> Source: http://lists.ovirt.org/pipermail/users/2012-February/000361.html
>>> "Yes, the ovirt backend does not shut down or power up any hosts directly, it can work only through vdsm. Therefore you need one running host per datacenter to be able to manage the rest of the hosts."
>>>
>>> I am then following this article:
>>> http://blog.jebpages.com/archives/how-to-get-up-and-running-with-ovirt/
>>>
>>> Where I get to the step of adding the machine itself as a host in its own cluster, and then:
>>> (This is a "Re-Install" from the WUI)
>>> 2012-06-28 12:25:00,000 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable hosts
>>> 2012-06-28 12:25:00,004 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Autorecovering 0 hosts
>>> 2012-06-28 12:25:00,005 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable hosts done
>>> 2012-06-28 12:25:00,006 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable storage domains
>>> 2012-06-28 12:25:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Autorecovering 0 storage domains
>>> 2012-06-28 12:25:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable storage domains done
>>> 2012-06-28 12:25:05,875 INFO [org.ovirt.engine.core.bll.UpdateVdsCommand] (ajp--0.0.0.0-8009-2) [13427c0] Running command: UpdateVdsCommand internal: false. Entities affected : ID: 105460c0-c0ea-11e1-b737-9b694eb255f6 Type: VDS
>>> 2012-06-28 12:25:05,889 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp--0.0.0.0-8009-2) [13427c0] START, SetVdsStatusVDSCommand(vdsId = 105460c0-c0ea-11e1-b737-9b694eb255f6, status=Installing, nonOperationalReason=NONE), log id: 5ef5ea33
>>> 2012-06-28 12:25:05,895 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp--0.0.0.0-8009-2) [13427c0] FINISH, SetVdsStatusVDSCommand, log id: 5ef5ea33
>>> 2012-06-28 12:25:05,910 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] Running command: InstallVdsCommand internal: true. Entities affected : ID: 105460c0-c0ea-11e1-b737-9b694eb255f6 Type: VDS
>>> 2012-06-28 12:25:05,914 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] Before Installation pool-3-thread-7
>>> 2012-06-28 12:25:05,915 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Starting Host installation)
>>> 2012-06-28 12:25:05,916 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Connecting to Host)
>>> 2012-06-28 12:25:05,959 INFO [org.ovirt.engine.core.utils.hostinstall.HostKeyVerifier] (NioProcessor-15) SSH key fingerprint f6:4d:81:ba:a0:f6:c4:09:85:18:10:5f:6f:47:09:58 for host njord.sto.slu.se (172.22.8.14) has been successfully verified.
>>> 2012-06-28 12:25:06,032 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='RHEV_INSTALL' status='OK' message='Connected to Host 172.22.8.14 with SSH key fingerprint: f6:4d:81:ba:a0:f6:c4:09:85:18:10:5f:6f:47:09:58'/>. FYI. (Stage: Connecting to Host)
>>> 2012-06-28 12:25:06,044 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Successfully connected to server ssh. (Stage: Connecting to Host)
>>> 2012-06-28 12:25:06,048 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Get the unique vds id)
>>> 2012-06-28 12:25:06,052 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Invoking /bin/echo -e `/bin/bash -c /usr/sbin/dmidecode|/bin/awk ' /UUID/{ print $2; } ' | /usr/bin/tr '
>>> ' '_' && cat /sys/class/net/*/address | /bin/grep -v '00:00:00:00' | /bin/sort -u | /usr/bin/head --lines=1` on 172.22.8.14
>>> 2012-06-28 12:25:06,145 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: 44454C4C-3200-1052-8050-B7C04F354431_00:15:17:36:60:4c
>>> . FYI. (Stage: Get the unique vds id)
>>> 2012-06-28 12:25:06,149 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Assigning unique id 44454C4C-3200-1052-8050-B7C04F354431_00:15:17:36:60:4c to Host. (Stage: Get the unique vds id)
>>> 2012-06-28 12:25:06,162 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) RunSSHCommand returns true
>>> 2012-06-28 12:25:06,164 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Upload Installation script to Host)
>>> 2012-06-28 12:25:06,166 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /usr/share/ovirt-engine/scripts/vds_installer.py to /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py on 172.22.8.14
>>> 2012-06-28 12:25:06,170 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /usr/share/ovirt-engine/scripts/vds_installer.py to /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py on 172.22.8.14
>>> 2012-06-28 12:25:09,363 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. successfully done sftp operation ( Stage: Upload Installation script to Host)
>>> 2012-06-28 12:25:09,364 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) return true
>>> 2012-06-28 12:25:09,365 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /tmp/firewall.conf5272561799347733512.tmp to /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 on 172.22.8.14
>>> 2012-06-28 12:25:09,366 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /tmp/firewall.conf5272561799347733512.tmp to /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 on 172.22.8.14
>>> 2012-06-28 12:25:12,504 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. successfully done sftp operation ( Stage: Upload Installation script to Host)
>>> 2012-06-28 12:25:12,509 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) return true
>>> 2012-06-28 12:25:12,512 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:12,516 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Sending SSH Command chmod +x /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py; /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py -c 'ssl=true;management_port=54321' -O 'slu' -t 2012-06-28T10:25:05 -f /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 -p 80 -b http://xcp-cms.data.slu.se:80/Components/vds/ http://xcp-cms.data.slu.se:80/Components/vds/ 172.22.8.14 ca67f0a5-115c-4943-a9ef-157654586da5 False. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:12,530 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Invoking chmod +x /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py; /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py -c 'ssl=true;management_port=54321' -O 'slu' -t 2012-06-28T10:25:05 -f /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 -p 80 -b http://xcp-cms.data.slu.se:80/Components/vds/ http://xcp-cms.data.slu.se:80/Components/vds/ 172.22.8.14 ca67f0a5-115c-4943-a9ef-157654586da5 False on 172.22.8.14
>>> 2012-06-28 12:25:13,545 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='INSTALLER' status='OK' message='Test platform succeeded'/>
>>> <BSTRAP component='INSTALLER LIB' status='OK' message='Install library already exists'/>
>>> <BSTRAP component='INSTALLER' status='OK' message='vds_bootstrap.py download succeeded'/>
>>> <BSTRAP component='RHN_REGISTRATION' status='OK' message='Host properly registered with RHN/Satellite.'/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:14,615 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDSM_MAJOR_VER' status='OK' message='Available VDSM matches requirements'/>
>>> <BSTRAP component='VT_SVM' status='OK' processor='Intel' message='Server supports virtualization'/>
>>> <BSTRAP component='OS' status='OK' type='FEDORA' message='Supported platform version'/>
>>> <BSTRAP component='KERNEL' status='OK' version='0' message='Skipped kernel version check'/>
>>> <BSTRAP component='CONFLICTING PACKAGES' status='OK' result='cman.x86_64' message='package cman.x86_64 is not installed '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='SDL.x86_64' message='SDL-1.2.14-16.fc17.x86_64 '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='bridge-utils.x86_64' message='bridge-utils-1.5-3.fc17.x86_64 '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='mesa-libGLU.x86_64' message='mesa-libGLU-8.0.3-1.fc17.x86_64 '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='openssl.x86_64' message='openssl-1.0.0j-1.fc17.x86_64 '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='m2crypto.x86_64' message='m2crypto-0.21.1-8.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:15,705 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='REQ PACKAGES' status='OK' result='rsync.x86_64' message='rsync-3.0.9-2.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm' message='qemu-kvm-1.0-17.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm-tools' message='qemu-kvm-tools-1.0-17.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm' message='vdsm-4.10.0-2.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm-cli' message='vdsm-cli-4.10.0-2.fc17.noarch '/>
>>> <BSTRAP component='VDS PACKAGES' status='WARN' result='libjpeg' message='package libjpeg is not installed '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='spice-server' message='spice-server-0.10.1-2.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='pixman' message='pixman-0.24.4-2.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='seabios' message='seabios-1.7.0-1.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-img' message='qemu-img-1.0-17.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:16,751 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='fence-agents' message='fence-agents-3.1.8-1.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='libselinux-python' message='libselinux-python-2.1.10-3.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:27,770 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm' message='qemu-kvm-1.0-17.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:29,778 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm-tools' message='qemu-kvm-tools-1.0-17.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:32,785 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm' message='vdsm-4.10.0-2.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:35,794 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm-cli' message='vdsm-cli-4.10.0-2.fc17.noarch '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:38,805 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='spice-server' message='spice-server-0.10.1-2.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:40,819 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='pixman' message='pixman-0.24.4-2.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:43,826 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='seabios' message='seabios-1.7.0-1.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:46,833 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-img' message='qemu-img-1.0-17.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:49,856 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='fence-agents' message='fence-agents-3.1.8-1.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:52,898 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='libselinux-python' message='libselinux-python-2.1.10-3.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:53,223 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='libjpeg' message='package libjpeg is not installed '/>
>>> <BSTRAP component='CreateConf' status='FAIL' message='Basic configuration failed to import default values'/>
>>> <BSTRAP component='RHEV_INSTALL' status='FAIL'/>
>>> . Error occured. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:53,236 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) RunSSHCommand returns true
>>> 2012-06-28 12:25:53,236 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] RunScript ended:true
>>> 2012-06-28 12:25:53,237 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Operation failure. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:53,246 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] After Installation pool-3-thread-7
>>> 2012-06-28 12:25:53,248 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-7) [3a635c45] START, SetVdsStatusVDSCommand(vdsId = 105460c0-c0ea-11e1-b737-9b694eb255f6, status=InstallFailed, nonOperationalReason=NONE), log id: 3cf757b4
>>> 2012-06-28 12:25:53,264 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-7) [3a635c45] FINISH, SetVdsStatusVDSCommand, log id: 3cf757b4
>>>
>>>
>>> The action in question "CreateConf" looks like:
>>>
>>> /usr/share/vdsm-bootstrap/vds_bootstrap.py
>>>
>>> def _makeConfig(self):
>>> import datetime
>>> from config import config
>>>
>>> if not os.path.exists(VDSM_CONF):
>>> logging.debug("makeConfig: generating conf.")
>>> lines = []
>>> lines.append ("# Auto-generated by vds_bootstrap at:" + str(datetime.datetime.now()) + "\n")
>>> lines.append ("\n")
>>>
>>> lines.append ("[vars]\n") #Adding ts for the coming scirpts.
>>> lines.append ("trust_store_path = " + config.get('vars', 'trust_store_path') + "\n")
>>> lines.append ("ssl = " + config.get('vars', 'ssl') + "\n")
>>> lines.append ("\n")
>>>
>>> lines.append ("[addresses]\n") #Adding mgt port for the coming scirpts.
>>> lines.append ("management_port = " + config.get('addresses', 'management_port') + "\n")
>>>
>>> logging.debug("makeConfig: writing the following to " + VDSM_CONF)
>>> logging.debug(lines)
>>> fd, tmpName = tempfile.mkstemp()
>>> f = os.fdopen(fd, 'w')
>>> f.writelines(lines)
>>> f.close()
>>> os.chmod(tmpName, 0644)
>>> shutil.move(tmpName, VDSM_CONF)
>>> else:
>>> self.message = 'Basic configuration found, skipping this step'
>>> logging.debug(self.message)
>>>
>>> def createConf(self):
>>> """
>>> Generate initial configuration file for VDSM. Must run after package installation!
>>> """
>>> self.message = 'Basic configuration set'
>>> self.rc = True
>>> self.status = 'OK'
>>>
>>> try:
>>> self._makeConfig()
>>> except Exception, e:
>>> logging.error('', exc_info=True)
>>> self.message = 'Basic configuration failed'
>>> if isinstance(e, ImportError):
>>> self.message = self.message + ' to import default values'
>>> self.rc = False
>>> self.status = 'FAIL'
>>>
>>> self._xmlOutput('CreateConf', self.status, None, None, self.message)
>>> return self.rc
>>>
>>>
>>> What now? Can anyone tell me why it fails? Besides the obvious "it's beta", of course :)
>>>
>>>
>>> Med Vänliga Hälsningar
>>> -------------------------------------------------------------------------------
>>> Karli Sjöberg
>>> Swedish University of Agricultural Sciences
>>> Box 7079 (Visiting Address Kronåsvägen 8)
>>> S-750 07 Uppsala, Sweden
>>> Phone: +46-(0)18-67 15 66
>>> karli.sjoberg(a)slu.se
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> Med Vänliga Hälsningar
> -------------------------------------------------------------------------------
> Karli Sjöberg
> Swedish University of Agricultural Sciences
> Box 7079 (Visiting Address Kronåsvägen 8)
> S-750 07 Uppsala, Sweden
> Phone: +46-(0)18-67 15 66
> karli.sjoberg(a)slu.se
>
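The quoted createConf above turns any ImportError raised inside _makeConfig (here, most likely the `from config import config` line failing on the host) into exactly the message seen in the log. A minimal standalone reproduction of that control flow (hypothetical helper names, not part of vds_bootstrap):

```python
# Mirror of the quoted createConf error handling: an ImportError from the
# config step produces 'Basic configuration failed to import default values'.
def create_conf(make_config):
    message = 'Basic configuration set'
    status = 'OK'
    try:
        make_config()
    except Exception as e:
        message = 'Basic configuration failed'
        if isinstance(e, ImportError):
            message += ' to import default values'
        status = 'FAIL'
    return status, message

def broken_make_config():
    # Simulates "from config import config" failing on the host.
    raise ImportError('No module named config')

print(create_conf(broken_make_config))
# ('FAIL', 'Basic configuration failed to import default values')
```

So the "failed to import default values" suffix points at a Python module missing on the host, rather than at libjpeg itself.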
On 2012-6-27 23:30, Michal Skrivanek wrote:
> That would make sense only when you are powered off, wouldn't it?
> Otherwise what's the point of disk snapshot with open transactions, buffers in mem, etc, without saving the state of the VM too?
I understand that syncing virtual disk state with the OS cache is quite a
challenge for snapshot tool developers. However, it is a handy feature for
users who just want to back up the data on their disks without stopping the
VM. As other virtualization solutions support this feature, we should narrow
the gap to attract more users to oVirt.
>
> Thanks,
> michal
>
> On 27 Jun 2012, at 17:26, Shu Ming <shuming(a)linux.vnet.ibm.com> wrote:
>
>> On 2012-6-27 23:19, Dafna Ron wrote:
>>> Hi,
>>>
>>> no. the snapshot is for the entire vm and it's disks.
>>>
>>>
>>> On 06/27/2012 05:31 PM, Shu Ming wrote:
>>>> Hi,
>>>>
>>>> I am testing oVirt engine 3.1 beta release 06-07. I tried
>>>> "snapshots-->create" button for a VM and it looked like that the
>>>> snapshot to be created was for both the running state of the system
>>>> and the virtual disks. I am wondering if there is any way to create
>>>> the snapshot only for the virtual disks of the VM in engine?
>>>>
>> Do we have a future plan to support snapshots of the disks only?
>>
>> --
>> Shu Ming <shuming(a)linux.vnet.ibm.com>
>> IBM China Systems and Technology Laboratory
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
--
Shu Ming <shuming(a)linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory
Hi,
I am testing the oVirt engine 3.1 beta release of 06-07. I tried the
"snapshots-->create" button for a VM, and it looked like the snapshot to
be created covered both the running state of the system and the virtual
disks. I am wondering if there is any way to create a snapshot of only
the VM's virtual disks in the engine?
--
Shu Ming <shuming(a)linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-06-27-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-06-27-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-06-27-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by oschreib at 14:00:19 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-06-27-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (oschreib, 14:00:29)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145
(oschreib, 14:12:20)
* Status of next release (oschreib, 14:12:41)
* GA date is July 9th (oschreib, 14:12:53)
* 11 blockers currently in the 3.1 tracker (oschreib, 14:13:12)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145
(oschreib, 14:13:32)
* Sanlock locking failed for readonly devices (component: libvirt,
status: ASSIGNED) (oschreib, 14:16:17)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=828633
(oschreib, 14:16:32)
* IDEA: the feature owner can put the wiki page in a different
category e.g. [[Category:Feature complete]] (quaid, 14:16:56)
* patch waiting for a review (oschreib, 14:18:12)
* It's impossible to create bond with setupNetworks (component: vdsm,
status: ASSIGNED) (oschreib, 14:19:13)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=831998
(oschreib, 14:19:23)
* merged upstream, waiting for the Fedora backport (oschreib,
14:23:01)
* vdsmd init script times out due to lengthy semanage operation
(component: vdsm, status: POST) (oschreib, 14:23:48)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832199
(oschreib, 14:23:57)
* patch in review (oschreib, 14:26:55)
* 3.1: sshd daemon is not starting correctly after complete the
installation of oVirt Node (component: node, status: MODIFIED)
(oschreib, 14:27:26)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832517
(oschreib, 14:27:35)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832199 (danken,
14:29:05)
* bug fixed, new node build today if possible (oschreib, 14:30:01)
* ACTION: mburns to build and upload new node version (oschreib,
14:30:14)
* 3.1: iptables blocking communication between node and engine
(component: node, status: MODIFIED) (oschreib, 14:31:30)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832539
(oschreib, 14:31:36)
* should be in next build (oschreib, 14:33:07)
* ovirt-node can't be approved due to missing /rhev/data-center
(component: vdsm, status: ON_QA) (oschreib, 14:33:45)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832577
(oschreib, 14:33:51)
* waiting for a verification (oschreib, 14:35:29)
* 3.1 Allow to create VLANed network on top of existing bond
(component: vdsm, status: POST) (oschreib, 14:36:05)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=833119
(oschreib, 14:36:20)
* pushed upstream, should be backported tomorrow to Fedora (oschreib,
14:40:56)
* Failed to add host - does not accept vdsm 4.10 (component: vdsm,
status: ON_QA) (oschreib, 14:41:24)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=833201
(oschreib, 14:41:33)
* waiting for a verification (oschreib, 14:43:51)
* [vdsm][bridgeless] BOOTPROTO/IPADDR/NETMASK options are not set on
interface (component: vdsm, status: NEW) (oschreib, 14:44:18)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=834281
(oschreib, 14:44:26)
* not a 3.1 blocker (oschreib, 14:51:06)
* Leak of keystore file descriptors in ovirt-engine (component:
engine, status: MODIFIED) (oschreib, 14:52:00)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=834399
(oschreib, 14:52:08)
* pushed and backported, waiting for a build (oschreib, 14:52:22)
* Sub-project reports (oschreib, 14:55:08)
* status covered in BZ overview (oschreib, 14:57:17)
* Upcoming workshops (oschreib, 14:57:29)
* no updates about upcoming workshops (oschreib, 14:59:25)
* Release notes (oschreib, 14:59:56)
* 3.1 release notes should be given to sgordon by mail (oschreib,
15:04:51)
* ACTION: oschreib to nag maintainers about RN (oschreib, 15:06:04)
Meeting ended at 15:08:51 UTC.
Action Items
------------
* mburns to build and upload new node version
* oschreib to nag maintainers about RN
Action Items, by person
-----------------------
* mburns
* mburns to build and upload new node version
* oschreib
* oschreib to nag maintainers about RN
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* oschreib (119)
* sgordon (24)
* danken (18)
* RobertM (10)
* ilvovsky (8)
* quaid (6)
* fabiand (6)
* mburns (4)
* ofrenkel (3)
* READ10 (2)
* ovirtbot (2)
* Guest1297 (1)
* crobinso (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
Hi guys,
I'm struggling with creating a disk via the API. I tried to POST this body:
<disk>
<name>my_cool_disk</name>
<provisioned_size>1073741824</provisioned_size>
<storage_domains>
<storage_domain>
<name>master_sd</name>
</storage_domain>
</storage_domains>
<size>1073741824</size>
<interface>virtio</interface>
<format>cow</format>
</disk>
but I get this error from CanDoAction:
2012-06-25 17:37:14,497 WARN [org.ovirt.engine.core.bll.AddDiskCommand]
(ajp--0.0.0.0-8009-11) [26a7e908] CanDoAction of action AddDisk failed.
Reasons:VAR__ACTION__ADD,VAR__TYPE__VM_DISK,ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST
2012-06-25 17:37:14,502 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
(ajp--0.0.0.0-8009-11) Operation Failed: [Cannot add Virtual Machine
Disk. Storage Domain doesn't exist.]
The storage domain 'master_sd' is operational and I can create a disk
from webadmin. According to the RSDL, provisioned_size is not a child
of the disk element:
<parameter required="true" type="xs:int">
<name>provisioned_size</name>
</parameter>
<parameter required="true" type="xs:string">
<name>disk.interface</name>
</parameter>
<parameter required="true" type="xs:string">
<name>disk.format</name>
</parameter>
but under api/disks it is.
Any ideas what I am doing wrong?
Thanks,
Kuba
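For what it's worth, a minimal sketch of building such a disk body with the standard library (the UUID below is a placeholder, and referencing the storage domain by id rather than name is only a suggestion worth trying, not a confirmed fix):

```python
# Sketch: build the <disk> body for POST /api/disks with ElementTree.
# The storage domain UUID is a placeholder; engine URL, auth headers and
# Content-Type: application/xml would be supplied via e.g. urllib.request.
import xml.etree.ElementTree as ET

def build_disk_xml(sd_id, size_bytes, fmt="cow", iface="virtio"):
    """Render a disk creation body, referencing the storage domain by id."""
    disk = ET.Element("disk")
    sds = ET.SubElement(disk, "storage_domains")
    ET.SubElement(sds, "storage_domain", id=sd_id)
    ET.SubElement(disk, "provisioned_size").text = str(size_bytes)
    ET.SubElement(disk, "interface").text = iface
    ET.SubElement(disk, "format").text = fmt
    return ET.tostring(disk, encoding="unicode")

body = build_disk_xml("11111111-2222-3333-4444-555555555555", 1073741824)
print(body)
```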
I installed the smbios hook, but it doesn't seem to be working. I was
told earlier that I needed to create a custom property to pass the
data, but that doesn't seem to be working either, and I'm not sure
what I'm doing wrong. I'm looking for debugging hints: how can I show
what is being passed to the hook, confirm it is grabbing the correct
info, and see what is being passed back to vdsm?
http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=tree;f=vdsm_hooks/smbios;h=c61c…
Thanks
Robert
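A minimal debugging aid along these lines (the log path and the property names it would show are illustrative assumptions): a before_vm_start hook receives the libvirt domain XML on stdin and the VM's custom properties as environment variables, so dumping both shows exactly what the smbios hook sees and what is handed back to vdsm.

```python
# Sketch of a vdsm hook input dumper; install alongside the real hook.
# Log path is an assumption -- adjust for your host.
import os
import sys

def format_hook_input(environ, domxml):
    """Render the environment and domain XML a hook was given."""
    lines = ["env %s=%s" % (k, v) for k, v in sorted(environ.items())]
    lines.append("domxml:")
    lines.append(domxml)
    return "\n".join(lines)

if __name__ == "__main__" and os.path.isdir("/var/log/vdsm"):
    # Only meaningful when run by vdsm on a real host.
    domxml = sys.stdin.read()
    with open("/var/log/vdsm/hook-debug.log", "a") as log:
        log.write(format_hook_input(os.environ, domxml) + "\n")
    sys.stdout.write(domxml)  # pass the XML through unchanged

out = format_hook_input({"smbios": "manufacturer=Acme"}, "<domain/>")
```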
Hey everyone,
There are two updates I'd like to share with you:
1. oVirt 3.1 release date has been delayed to the 9th of July.
As you all know, we're in the middle of creating a new build of oVirt. With your help, we found multiple issues which we consider release blockers. Some of these issues require a few days to be solved properly, and as a result we had to delay the general availability of oVirt.
2. New ovirt-engine rpms are available for download at http://ovirt.org/releases/beta/fedora/17
This build contains multiple bug fixes, as well as a new versioning scheme which will ensure future updates are applied correctly.
Please note that due to the new versions, we don't support in-beta upgrades. Please make sure you clean up your environment (using engine-cleanup and yum remove) before installing the new rpms (version 3.1.0-0.1.20120620git6ef9f8.fc17).
New VDSM rpms should be available at the beginning of next week.
Regards,
--
Ofer Schreiber
oVirt Release Manager
Steps:
1) Installed ovirt-engine, configured Cluster for GlusterFS
2) Installed 8 ovirt 3.1 nodes
3) Joined all 8 nodes with ovirt-engine (all up and happy)
4) Manually added all 8 peers for GlusterFS on hosts (all peers happy)
5) Create Volume errors:
2012-06-25 16:44:42,412 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector]
(ajp--0.0.0.0-8009-4) [6c02c1f5] The message key CreateGlusterVolume is
missing from bundles/ExecutionMessages
2012-06-25 16:44:42,483 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Running command:
CreateGlusterVolumeCommand internal: false. Entities affected : ID:
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 16:44:42,486 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] START,
CreateGlusterVolumeVDSCommand(vdsId =
8324ff12-bf1c-11e1-b235-43d3f71a81d8), log id: 15757d4c
2012-06-25 16:44:42,593 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Failed in CreateGlusterVolumeVDS method
2012-06-25 16:44:42,594 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = Unexpected exception
2012-06-25 16:44:42,594 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-4)
[6c02c1f5] Command CreateGlusterVolumeVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,595 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] FINISH, CreateGlusterVolumeVDSCommand,
log id: 15757d4c
2012-06-25 16:44:42,596 ERROR
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = Unexpected exception
><>
Nathan Stratton
nathan at robotics.net
http://www.robotics.net
I'm attempting to add iSCSI storage to a data center via the web
interface, and while I'm able to see the target iSCSI device and log
in, clicking the "OK" button doesn't submit the storage selection.
I noticed from looking at the oVirt manual that somewhere in this
interface I have to select the LUN. However once logged into the
target I never see LUNs to select.
I've attached two screenshots that show the UI and what could be
errors in how it's being displayed. I also attached logs from vdsm
host and engine that were captured while attempting to add the storage
domain.
After logging into the iSCSI target via web interface, this is the
output on target host, showing that my ovirt node (10.20.1.240) is
connected.
# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool
System information:
Driver: iscsi
node dc-store0.tamu.edu
State: ready
I_T nexus information:
I_T nexus: 3
Initiator: iqn.1994-05.com.redhat:ad499aa5f37e
Connection: 0
IP Address: 10.20.1.240
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: p_lu_coraid23_lu
SCSI SN: (stdin)=
Size: 4398047 MB, Block size: 512
Online: Yes
Removable media: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/vg_coraid23/lv_kvm_pool
Backing store flags:
Account information:
ACL information:
ALL
Based on some archived emails, I've included previously mentioned
outputs that may help.
From the oVirt node:
# iscsiadm -m discovery -p 10.20.1.250 -t sendtargets --login
10.20.1.250:3260,1 iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool
Logging in to [iface: default, target:
iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool, portal: 10.20.1.250,3260]
(multiple)
Login to [iface: default, target:
iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool, portal: 10.20.1.250,3260]
successful.
# multipath -r
Jun 25 14:19:22 | 3600508e0000000004ecf45ecb0c2ec0a: ignoring map
reload: 1p_lu_coraid23_lu undef IET,VIRTUAL-DISK
size=4.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
`- 8:0:0:1 sdc 8:32 active ready running
# iscsiadm -m session
tcp: [4] 10.20.1.250:3260,1 iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool
# vdsClient -s 0 getDeviceList
[{'GUID': '1p_lu_coraid23_lu',
'capacity': '4398046511104',
'devtype': 'iSCSI',
'fwrev': '0001',
'logicalblocksize': '512',
'partitioned': False,
'pathlist': [{'connection': '10.20.1.250',
'initiatorname': 'default',
'iqn': 'iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '1',
'physdev': 'sdc',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'VIRTUAL-DISK',
'pvUUID': '7MHtfm-352q-zWrT-KfU8-sJ1l-e4rP-aKsuhu',
'serial': 'SIET_VIRTUAL-DISK',
'vendorID': 'IET',
'vgUUID': 'YCso68-YCo8-w817-I5Nf-mFx3-aoPe-LWkYco'}]
# vgs -o+pv_name
VG #PV #LV #SN Attr VSize VFree
PV
f18b2342-4713-4319-a878-3025b38556a4 1 6 0 wz--n- 4.00t 4.00t
/dev/mapper/1p_lu_coraid23_lu
vg_dc-kvm0 1 2 0 wz--n- 67.50g 0
/dev/sda2
Thanks
- Trey
Good Monday morning,
I installed Fedora 17 and tried to add the node to a 3.1 engine.
I'm getting a VDS Network exception on the engine side:
in /var/log/ovirt-engine/engine:
2012-06-25 10:15:34,132 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-96)
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb : ovirt-node2.smb.eurotux.local,
VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 10:15:36,143 ERROR
[org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-20)
VDS::handleNetworkException Server failed to respond, vds_id =
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, vds_name =
ovirt-node2.smb.eurotux.local, error = VDSNetworkException:
2012-06-25 10:15:36,181 INFO
[org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
ResourceManager::vdsNotResponding entered for Host
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
2012-06-25 10:15:36,214 ERROR
[org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
(pool-3-thread-49) [1afd4b89] Failed to run Fence script on
vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.
Meanwhile, on the node, vdsmd fails to sample NICs:
in /var/log/vdsm/vdsm.log:
nf = netinfo.NetInfo()
File "/usr/share/vdsm/netinfo.py", line 268, in __init__
_netinfo = get()
File "/usr/share/vdsm/netinfo.py", line 220, in get
for nic in nics() ])
KeyError: 'p36p1'
MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM
main thread ended. Waiting for 1 other threads...
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
<_MainThread(MainThread, started 140567823243072)>
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
<Thread(libvirtEventLoop, started daemon 140567752681216)>
In /var/log/messages there are a lot of "vdsmd died too quickly" lines:
Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm' died
too quickly, respawning slave
Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm' died
too quickly, respawning slave
Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm' died
too quickly for more than 30 seconds, master sleeping for 900 seconds
I don't know why Fedora 17 calls what was eth0 in Fedora 16 'p36p1',
but I tried to configure an ovirtmgmt bridge and the only difference
is that the KeyError becomes 'ovirtmgmt'.
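The shape of that traceback can be reproduced with a toy version (not vdsm's actual code; interface names and the speeds table below are made up): netinfo looks every kernel-reported interface up in a table, so any interface the kernel knows but the table lacks raises KeyError and kills the daemon.

```python
# Toy reproduction of the netinfo failure shape. A biosdevname-style
# interface ("p36p1") appears in the kernel's nic list but not in the
# lookup table, so the strict lookup raises KeyError.
def nics():
    return ["lo", "em1", "p36p1"]   # what the kernel reports

speeds = {"lo": 0, "em1": 1000}     # table missing p36p1

try:
    info = dict([(nic, speeds[nic]) for nic in nics()])
except KeyError as e:
    crashed_on = e.args[0]

# A tolerant lookup keeps the daemon alive instead of dying on the gap:
info = dict((nic, speeds.get(nic, 0)) for nic in nics())
```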
Regards,
Jose Garcia
Hello everyone,
I'd like to collect feedback from everyone on last week's oVirt workshop
at LinuxCon Japan. [0] Please reply back with comments by close of
business this Friday, 15 June.
Once we have gathered feedback on list, I'll capture it on the oVirt
wiki. We can then use what we've learned to help construct the agenda
and other plans for other upcoming oVirt workshops at LinuxCons. [1]
If you know that you would like to volunteer as an instructor for future
workshops or would like to suggest alternate content for the workshop,
please include that in your feedback.
- Course Material
What sessions were most well received? Which ones require improvement?
Any additional sessions we'd suggest?
- Audience Participation
How many attendees? How did the Q&A periods go? Would we like to prepare
a post-event attendee survey? (I recommend we do the survey and can
prepare some questions for the group if that's useful.)
- Developer/user traction resulting from engaging at workshop
Did this workshop help us to gain new developers or users? Reinforce
relationships with existing community members?
- Promotion of the event, both before and after
What could be done to more effectively promote the event prior to the
workshops? Videos and slides from the workshop should be posted on the
LC Japan site tomorrow; what action would the community like to take to
promote this content?
- A/V and Room Set up
Did the seating arrangements work well for the workshop format? Did the
A/V work well, including the videotaping process?
- Food and beverage
Did the catered-in lunch help to keep the flow of the workshop
productive? Was the food of good quality and in keeping with
attendees' dietary constraints?
- Giveaways
We did not produce attendee gifts for the oVirt workshop. Thoughts on
whether this would be a welcome addition in the future? Suggestions for
type of gift also welcome.
- Any other feedback
If it is preferable to discuss this feedback real-time, I will ask Mike
Burns to give us more time for this topic during next week's oVirt IRC
meeting.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-work…
[1] - http://www.ovirt.org/wiki/OVirt_Global_Workshops
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
25 Jun '12
The next oVirt Weekly Meeting is scheduled for 2012-06-20 at 15:00 UTC
(10:00 AM EDT).
Current Agenda Items (as of this email)
* Status of Next Release
* Sub-project reports (engine, vdsm, node)
* Upcoming workshops
Latest Agenda is listed here:
http://www.ovirt.org/wiki/Meetings#Weekly_project_sync_meeting
Reminder: If you propose an additional agenda item, please be present
for the meeting to lead the discussion on it.
Thanks
Mike
VM with attached direct lun not started, with this error in vm.log:

qemu-kvm: -drive file=/dev/mapper/1p_ISCSI_0_lun1,if=none,id=drive-virtio-disk1,format=raw,serial=,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /dev/mapper/1p_ISCSI_0_lun1: Permission denied

Is this a bug, or is the feature not implemented yet?

env:
fedora17
engine from http://www.ovirt.org/releases/beta/fedora/17/
vdsm-4.10.0-0.57.git2987ee3.fc17.x86_64

[root@kvm04 /]# ls -l /dev/mapper/ | grep lun1
lrwxrwxrwx. 1 root root 7 Jun 25 14:05 1p_ISCSI_0_lun1 -> ../dm-7
[root@kvm04 /]# ls -l /dev/ | grep dm-7
brw-rw----. 1 root disk 253, 7 Jun 25 14:05 dm-7

usermod -a -G disk qemu solved the problem, but is it the correct way to solve it?

--
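The message above boils down to a Unix permission question: /dev/dm-7 is brw-rw---- root:disk, so only root and members of the disk group may open it. A pure-Python toy decoder of that mode (it touches no real devices; the mode and names come from the ls output above):

```python
# Decode a brw-rw---- style permission set: who is allowed to read?
import stat

def who_can_open(mode, owner, group):
    """Summarize which users a given file mode admits for reading."""
    readers = []
    if mode & stat.S_IRUSR:
        readers.append("owner %s" % owner)
    if mode & stat.S_IRGRP:
        readers.append("group %s" % group)
    if mode & stat.S_IROTH:
        readers.append("everyone")
    return readers

# 0o660 root:disk, as in the ls -l output: qemu is neither, hence EACCES.
assert who_can_open(0o660, "root", "disk") == ["owner root", "group disk"]
```

Adding qemu to the disk group does work, but it widens qemu's access to every disk on the host; newer vdsm versions reportedly set per-device ownership for direct LUNs via udev rules instead, so it is worth checking whether your vdsm already does that.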
Dear all,
Can you help me?
# tail /var/log/ovirt-engine/engine.log -f
2012-06-25 17:50:42,796 INFO [org.ovirt.engine.core.utils.hostinstall.HostKeyVerifier] (NioProcessor-21) SSH key fingerprint b4:f3:b9:4b:93:40:e8:9e:57:fc:b3:fe:b3:af:13:b4 for host 172.30.1.63 (172.30.1.63) has been successfully verified.
2012-06-25 17:50:42,872 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (ajp--0.0.0.0-8009-3) Invoking /bin/echo -e `/bin/bash -c /usr/sbin/dmidecode|/bin/awk ' /UUID/{ print $2; } ' | /usr/bin/tr '
' '_' && cat /sys/class/net/*/address | /bin/grep -v '00:00:00:00' | /bin/sort -u | /usr/bin/head --lines=1` on 172.30.1.63
2012-06-25 17:50:42,959 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (ajp--0.0.0.0-8009-3) RunSSHCommand returns true
2012-06-25 17:50:42,985 INFO [org.ovirt.engine.core.bll.AddVdsCommand] (ajp--0.0.0.0-8009-3) [7d335777] Running command: AddVdsCommand internal: false. Entities affected : ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 17:50:43,017 INFO [org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (ajp--0.0.0.0-8009-3) [318b963a] Running command: AddVdsSpmIdCommand internal: true. Entities affected : ID: 3a9d9a40-beab-11e1-9366-525400fe2d56 Type: VDS
2012-06-25 17:50:43,048 ERROR [org.ovirt.engine.core.vdsbroker.ResourceManager] (ajp--0.0.0.0-8009-3) [318b963a] Cannot get vdsManager for vdsid=3a9d9a40-beab-11e1-9366-525400fe2d56
2012-06-25 17:50:43,049 INFO [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] (ajp--0.0.0.0-8009-3) [318b963a] START, RemoveVdsVDSCommand(vdsId = 3a9d9a40-beab-11e1-9366-525400fe2d56), log id: 7bd4b2c7
2012-06-25 17:50:43,051 ERROR [org.ovirt.engine.core.vdsbroker.ResourceManager] (ajp--0.0.0.0-8009-3) [318b963a] Cannot get vdsManager for vdsid=3a9d9a40-beab-11e1-9366-525400fe2d56
2012-06-25 17:50:43,052 INFO [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] (ajp--0.0.0.0-8009-3) [318b963a] FINISH, RemoveVdsVDSCommand, log id: 7bd4b2c7
2012-06-25 17:50:43,053 ERROR [org.ovirt.engine.core.vdsbroker.ResourceManager] (ajp--0.0.0.0-8009-3) [318b963a] Cannot get vdsManager for vdsid=3a9d9a40-beab-11e1-9366-525400fe2d56
2012-06-25 17:50:43,054 INFO [org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (ajp--0.0.0.0-8009-3) [318b963a] START, AddVdsVDSCommand(vdsId = 3a9d9a40-beab-11e1-9366-525400fe2d56), log id: 49256654
2012-06-25 17:50:43,056 INFO [org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (ajp--0.0.0.0-8009-3) [318b963a] AddVds - entered , starting logic to add VDS 3a9d9a40-beab-11e1-9366-525400fe2d56
2012-06-25 17:50:43,059 INFO [org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (ajp--0.0.0.0-8009-3) [318b963a] AddVds - VDS 3a9d9a40-beab-11e1-9366-525400fe2d56 was added, will try to add it to the resource manager
2012-06-25 17:50:43,062 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (ajp--0.0.0.0-8009-3) [318b963a] Eneterd VdsManager:constructor
2012-06-25 17:50:43,063 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (ajp--0.0.0.0-8009-3) [318b963a] vdsBroker(172.30.1.63,54,321)
2012-06-25 17:50:43,065 INFO [org.ovirt.engine.core.vdsbroker.ResourceManager] (ajp--0.0.0.0-8009-3) [318b963a] ResourceManager::AddVds - VDS 3a9d9a40-beab-11e1-9366-525400fe2d56 was added to the Resource Manager
2012-06-25 17:50:43,067 INFO [org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (ajp--0.0.0.0-8009-3) [318b963a] FINISH, AddVdsVDSCommand, log id: 49256654
2012-06-25 17:50:43,098 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-49) [44a5dc80] Running command: InstallVdsCommand internal: true. Entities affected : ID: 3a9d9a40-beab-11e1-9366-525400fe2d56 Type: VDS
2012-06-25 17:50:43,102 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-49) [44a5dc80] Before Installation pool-3-thread-49
2012-06-25 17:50:43,103 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Executing installation stage. (Stage: Starting Host installation)
2012-06-25 17:50:43,105 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Executing installation stage. (Stage: Connecting to Host)
2012-06-25 17:50:43,134 INFO [org.ovirt.engine.core.utils.hostinstall.HostKeyVerifier] (NioProcessor-27) SSH key fingerprint b4:f3:b9:4b:93:40:e8:9e:57:fc:b3:fe:b3:af:13:b4 for host 172.30.1.63 (172.30.1.63) has been successfully verified.
2012-06-25 17:50:43,206 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Recieved message: <BSTRAP component='RHEV_INSTALL' status='OK' message='Connected to Host 172.30.1.63 with SSH key fingerprint: b4:f3:b9:4b:93:40:e8:9e:57:fc:b3:fe:b3:af:13:b4'/>. FYI. (Stage: Connecting to Host)
2012-06-25 17:50:43,223 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Successfully connected to server ssh. (Stage: Connecting to Host)
2012-06-25 17:50:43,224 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Executing installation stage. (Stage: Get the unique vds id)
2012-06-25 17:50:43,225 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) Invoking /bin/echo -e `/bin/bash -c /usr/sbin/dmidecode|/bin/awk ' /UUID/{ print $2; } ' | /usr/bin/tr '
' '_' && cat /sys/class/net/*/address | /bin/grep -v '00:00:00:00' | /bin/sort -u | /usr/bin/head --lines=1` on 172.30.1.63
2012-06-25 17:50:43,292 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Recieved message: 00010000-FE01-0300-00C4-43504C4400C7_00:23:8b:65:08:90
. FYI. (Stage: Get the unique vds id)
2012-06-25 17:50:43,306 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Assigning unique id 00010000-FE01-0300-00C4-43504C4400C7_00:23:8b:65:08:90 to Host. (Stage: Get the unique vds id)
2012-06-25 17:50:43,311 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) RunSSHCommand returns true
2012-06-25 17:50:43,316 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Executing installation stage. (Stage: Upload Installation script to Host)
2012-06-25 17:50:43,325 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) Uploading file /usr/share/ovirt-engine/scripts/vds_installer.py to /tmp/vds_installer_2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6.py on 172.30.1.63
2012-06-25 17:50:43,335 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) Uploading file /usr/share/ovirt-engine/scripts/vds_installer.py to /tmp/vds_installer_2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6.py on 172.30.1.63
2012-06-25 17:50:43,668 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. successfully done sftp operation ( Stage: Upload Installation script to Host)
2012-06-25 17:50:43,674 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) return true
2012-06-25 17:50:43,676 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) Uploading file /tmp/firewall.conf4248057098344742688.tmp to /tmp/firewall.conf.2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6 on 172.30.1.63
2012-06-25 17:50:43,678 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) Uploading file /tmp/firewall.conf4248057098344742688.tmp to /tmp/firewall.conf.2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6 on 172.30.1.63
2012-06-25 17:50:43,965 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. successfully done sftp operation ( Stage: Upload Installation script to Host)
2012-06-25 17:50:43,966 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) return true
2012-06-25 17:50:43,967 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Executing installation stage. (Stage: Running first installation script on Host)
2012-06-25 17:50:43,968 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Sending SSH Command chmod +x /tmp/vds_installer_2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6.py; /tmp/vds_installer_2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6.py -c 'ssl=true;management_port=54321' -O 'oVirt' -t 2012-06-25T09:50:43 -f /tmp/firewall.conf.2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6 -p 80 -b http://fedora17.kvm.com:80/Components/vds/ http://fedora17.kvm.com:80/Components/vds/ 172.30.1.63 2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6 False. (Stage: Running first installation script on Host)
2012-06-25 17:50:43,971 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) Invoking chmod +x /tmp/vds_installer_2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6.py; /tmp/vds_installer_2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6.py -c 'ssl=true;management_port=54321' -O 'oVirt' -t 2012-06-25T09:50:43 -f /tmp/firewall.conf.2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6 -p 80 -b http://fedora17.kvm.com:80/Components/vds/ http://fedora17.kvm.com:80/Components/vds/ 172.30.1.63 2ea9f19e-7e9d-46a5-bdfb-dc86532c1fe6 False on 172.30.1.63
2012-06-25 17:50:44,040 WARN [org.ovirt.engine.core.ServletUtils] (ajp--0.0.0.0-8009-5) File "/usr/share/vdsm-bootstrap/vds_bootstrap.py is 33883 bytes long. Please reconsider using this servlet for files larger than 8192 bytes.
2012-06-25 17:50:44,976 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Recieved message: <BSTRAP component='RHEV_INSTALL' status='OK' message='oVirt Node DETECTED'/>
<BSTRAP component='INSTALLER LIB' status='OK' message='Install library already exists'/>
<BSTRAP component='INSTALLER' status='OK' message='vds_bootstrap.py download succeeded'/>
. FYI. (Stage: Running first installation script on Host)
2012-06-25 17:50:45,067 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-55) Initializing Host: node3.kvm.com
2012-06-25 17:50:45,345 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Recieved message: <BSTRAP component='RHEV_INSTALL' status='OK' message='RHEV-H ACCESSIBLE'/>
. Stage completed. (Stage: Running first installation script on Host)
2012-06-25 17:50:45,362 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-49) RunSSHCommand returns true
2012-06-25 17:50:45,369 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-49) [44a5dc80] RunScript ended:true
2012-06-25 17:50:45,369 INFO [org.ovirt.engine.core.bll.RegisterVdsQuery] (ajp--0.0.0.0-8009-1) Running Command: RegisterVds
2012-06-25 17:50:45,378 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-49) [44a5dc80] After Installation pool-3-thread-49
2012-06-25 17:50:45,388 INFO [org.ovirt.engine.core.bll.ApproveVdsCommand] (pool-3-thread-50) [40044463] Running command: ApproveVdsCommand internal: true. Entities affected : ID: 3a9d9a40-beab-11e1-9366-525400fe2d56 Type: VDS
2012-06-25 17:50:45,390 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-50) [40044463] Before Installation pool-3-thread-50, Powerclient/oVirtNode case: setting status to installing
2012-06-25 17:50:45,393 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-50) [40044463] START, SetVdsStatusVDSCommand(vdsId = 3a9d9a40-beab-11e1-9366-525400fe2d56, status=Installing, nonOperationalReason=NONE), log id: ee6f19
2012-06-25 17:50:45,395 INFO [org.ovirt.engine.core.register.RegisterServlet] (ajp--0.0.0.0-8009-1) Succeeded to run RegisterVds.
2012-06-25 17:50:45,407 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-50) [40044463] FINISH, SetVdsStatusVDSCommand, log id: ee6f19
2012-06-25 17:50:45,418 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-50) [40044463] Before Installation pool-3-thread-50
2012-06-25 17:50:45,424 INFO [org.ovirt.engine.core.bll.CBCInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Executing oVirt installation stage. (Stage: Starting Host installation)
2012-06-25 17:50:45,432 INFO [org.ovirt.engine.core.bll.CBCInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Executing oVirt installation stage. (Stage: Connecting to Host)
2012-06-25 17:50:45,471 INFO [org.ovirt.engine.core.utils.hostinstall.HostKeyVerifier] (NioProcessor-33) SSH key fingerprint b4:f3:b9:4b:93:40:e8:9e:57:fc:b3:fe:b3:af:13:b4 for host 172.30.1.63 (172.30.1.63) has been successfully verified.
2012-06-25 17:50:45,556 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Recieved message: <BSTRAP component='RHEV_INSTALL' status='OK' message='Connected to Host 172.30.1.63 with SSH key fingerprint: b4:f3:b9:4b:93:40:e8:9e:57:fc:b3:fe:b3:af:13:b4'/>. FYI. (Stage: Connecting to Host)
2012-06-25 17:50:45,580 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Successfully connected to server ssh. (Stage: Connecting to Host)
2012-06-25 17:50:45,588 INFO [org.ovirt.engine.core.bll.CBCInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Executing oVirt installation stage. (Stage: Running first installation script on Host)
2012-06-25 17:50:45,590 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) Invoking /usr/share/vdsm-reg/vdsm-gen-cert -O "oVirt" 172.30.1.63 97174773-9fa8-46af-9e31-46307330aeac on 172.30.1.63
2012-06-25 17:50:56,379 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Recieved message: <BSTRAP component='Encryption setup' status='OK'/>
<BSTRAP component='RHEV_INSTALL' status='OK'/>
. Stage completed. (Stage: Running first installation script on Host)
2012-06-25 17:50:56,393 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) RunSSHCommand returns true
2012-06-25 17:50:56,394 INFO [org.ovirt.engine.core.bll.CBCInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Executing oVirt installation stage. (Stage: Downloading certificate request from Host)
2012-06-25 17:50:56,395 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) Downloading file /tmp/cert_97174773-9fa8-46af-9e31-46307330aeac.req from 172.30.1.63 to /etc/pki/ovirt-engine/requests/cert_97174773-9fa8-46af-9e31-46307330aeac.req
2012-06-25 17:50:56,680 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. successfully done sftp operation ( Stage: Downloading certificate request from Host)
2012-06-25 17:50:56,688 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) return true
2012-06-25 17:50:56,693 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] DownloadCertificateRequest ended:true
2012-06-25 17:50:56,699 INFO [org.ovirt.engine.core.bll.CBCInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Executing oVirt installation stage. (Stage: Sign certificate request and generate certificate)
2012-06-25 17:50:57,713 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] SignCertificateRequest ended:true
2012-06-25 17:50:57,718 INFO [org.ovirt.engine.core.bll.CBCInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Executing oVirt installation stage. (Stage: Upload signed sertificate to Host)
2012-06-25 17:50:57,724 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) Uploading file /etc/pki/ovirt-engine/certs/172.30.1.63cert.pem to /tmp/cert_97174773-9fa8-46af-9e31-46307330aeac.pem on 172.30.1.63
2012-06-25 17:50:57,726 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) Uploading file /etc/pki/ovirt-engine/certs/172.30.1.63cert.pem to /tmp/cert_97174773-9fa8-46af-9e31-46307330aeac.pem on 172.30.1.63
2012-06-25 17:50:57,988 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. successfully done sftp operation ( Stage: Upload signed sertificate to Host)
2012-06-25 17:50:57,989 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) return true
2012-06-25 17:50:57,990 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] UploadSignedCertificate ended:true
2012-06-25 17:50:57,991 INFO [org.ovirt.engine.core.bll.CBCInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Executing oVirt installation stage. (Stage: Upload Cerficate Autority to Host)
2012-06-25 17:50:57,992 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) Uploading file /etc/pki/ovirt-engine/ca.pem to /tmp/CA_97174773-9fa8-46af-9e31-46307330aeac.pem on 172.30.1.63
2012-06-25 17:50:57,994 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) Uploading file /etc/pki/ovirt-engine/ca.pem to /tmp/CA_97174773-9fa8-46af-9e31-46307330aeac.pem on 172.30.1.63
2012-06-25 17:50:58,249 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. successfully done sftp operation ( Stage: Upload Cerficate Autority to Host)
2012-06-25 17:50:58,258 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) return true
2012-06-25 17:50:58,263 INFO [org.ovirt.engine.core.bll.CBCInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Executing oVirt installation stage, sending SSH Command /usr/share/vdsm-reg/vdsm-complete -c 'ssl=true' 97174773-9fa8-46af-9e31-46307330aeac 0. (Stage: Running second installation script on Host)
2012-06-25 17:50:58,267 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) Invoking /usr/share/vdsm-reg/vdsm-complete -c 'ssl=true' 97174773-9fa8-46af-9e31-46307330aeac 0 on 172.30.1.63
2012-06-25 17:51:02,269 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-50) [40044463] Installation of 172.30.1.63. Recieved message: <BSTRAP component='instCert' status='OK'/>
<BSTRAP component='CoreDump' status='OK'/>
<BSTRAP component='cleanAll' status='OK'/>
<BSTRAP component='VDS Configuration' status='OK'/>
<BSTRAP component='Restart' status='OK' message='Restarting vdsmd service' />
<BSTRAP component='RHEV_INSTALL' status='OK'/>
. Stage completed. (Stage: Running second installation script on Host)
2012-06-25 17:51:02,317 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-50) RunSSHCommand returns true
2012-06-25 17:51:02,318 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-50) [40044463] After Installation pool-3-thread-50
2012-06-25 17:51:02,319 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-50) [40044463] START, SetVdsStatusVDSCommand(vdsId = 3a9d9a40-beab-11e1-9366-525400fe2d56, status=NonResponsive, nonOperationalReason=NONE), log id: 513cfcb8
2012-06-25 17:51:02,343 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-50) [40044463] FINISH, SetVdsStatusVDSCommand, log id: 513cfcb8
2012-06-25 17:51:02,351 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-50) [40044463] START, SetVdsStatusVDSCommand(vdsId = 3a9d9a40-beab-11e1-9366-525400fe2d56, status=Unassigned, nonOperationalReason=NONE), log id: 36058f20
2012-06-25 17:51:02,367 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-50) [40044463] FINISH, SetVdsStatusVDSCommand, log id: 36058f20
2012-06-25 17:51:02,372 INFO [org.ovirt.engine.core.bll.RegisterVdsQuery] (pool-3-thread-50) [40044463] Approval of oVirt 3a9d9a40-beab-11e1-9366-525400fe2d56 ended successefully.
2012-06-25 17:51:03,109 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-83) XML RPC error in command GetCapabilitiesVDS ( Vds: node3.kvm.com ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, EOFException: SSL peer shut down incorrectly
2012-06-25 17:51:03,115 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-83) ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 3a9d9a40-beab-11e1-9366-525400fe2d56 : node3.kvm.com, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 17:51:05,127 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-95) XML RPC error in command GetCapabilitiesVDS ( Vds: node3.kvm.com ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, EOFException: SSL peer shut down incorrectly
2012-06-25 17:51:05,140 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-95) ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 3a9d9a40-beab-11e1-9366-525400fe2d56 : node3.kvm.com, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 17:51:07,162 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-76) XML RPC error in command GetCapabilitiesVDS ( Vds: node3.kvm.com ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, EOFException: SSL peer shut down incorrectly
2012-06-25 17:51:07,170 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-76) ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 3a9d9a40-beab-11e1-9366-525400fe2d56 : node3.kvm.com, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 17:51:09,187 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-84) XML RPC error in command GetCapabilitiesVDS ( Vds: node3.kvm.com ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, EOFException: SSL peer shut down incorrectly
2012-06-25 17:51:09,189 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-84) ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 3a9d9a40-beab-11e1-9366-525400fe2d56 : node3.kvm.com, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 17:51:11,202 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-91) XML RPC error in command GetCapabilitiesVDS ( Vds: node3.kvm.com ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, EOFException: SSL peer shut down incorrectly
2012-06-25 17:51:11,204 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-91) ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 3a9d9a40-beab-11e1-9366-525400fe2d56 : node3.kvm.com, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 17:51:13,217 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-88) XML RPC error in command GetCapabilitiesVDS ( Vds: node3.kvm.com ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, EOFException: SSL peer shut down incorrectly
2012-06-25 17:51:13,219 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-88) ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 3a9d9a40-beab-11e1-9366-525400fe2d56 : node3.kvm.com, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 17:51:15,240 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-8) XML RPC error in command GetCapabilitiesVDS ( Vds: node3.kvm.com ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, EOFException: SSL peer shut down incorrectly
2012-06-25 17:51:15,245 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-8) ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 3a9d9a40-beab-11e1-9366-525400fe2d56 : node3.kvm.com, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 17:51:17,260 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-5) XML RPC error in command GetCapabilitiesVDS ( Vds: node3.kvm.com ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, EOFException: SSL peer shut down incorrectly
2012-06-25 17:51:17,262 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-5) ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 3a9d9a40-beab-11e1-9366-525400fe2d56 : node3.kvm.com, VDS Network Error, continuing.
VDSNetworkException:
# ping node3.kvm.com
PING node3.kvm.com (172.30.1.63) 56(84) bytes of data.
64 bytes from 172.30.1.63: icmp_req=1 ttl=64 time=0.325 ms
# ping `hostname`
PING fedora17-ovirt.kvm.com (172.30.1.30) 56(84) bytes of data.
64 bytes from fedora17-ovirt.kvm.com (172.30.1.30): icmp_req=1 ttl=64 time=0.036 ms
# systemctl status jboss-as.service
jboss-as.service - The JBoss Application Server
Loaded: loaded (/usr/lib/systemd/system/jboss-as.service; enabled)
Active: active (running) since Mon, 25 Jun 2012 17:29:06 +0800; 24min ago
Main PID: 4873 (standalone.sh)
CGroup: name=systemd:/system/jboss-as.service
├ 4873 /bin/sh /usr/share/jboss-as/bin/standalone.sh -c standalone-web.xml
└ 4925 java -D[Standalone] -server -XX:+UseCompressedOops -XX:+TieredCompilation -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dorg.j...
# systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
Active: active (running) since Mon, 25 Jun 2012 17:10:36 +0800; 43min ago
Main PID: 3567 (libvirtd)
CGroup: name=systemd:/system/libvirtd.service
└ 3567 /usr/sbin/libvirtd --listen
Jun 25 17:10:36 fedora17-ovirt.kvm.com libvirtd[3567]: Could not find keytab file: /etc/libvirt/krb5.tab: No such file or directory
Jun 25 17:10:36 fedora17-ovirt.kvm.com libvirtd[3567]: server add_plugin entry_point error generic failure
Jun 25 17:10:36 fedora17-ovirt.kvm.com libvirtd[3567]: _sasl_plugin_load failed on sasl_server_plug_init for plugin: gssapiv2
# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Mon, 25 Jun 2012 17:22:22 +0800; 31min ago
Main PID: 4630 (respawn)
CGroup: name=systemd:/system/vdsmd.service
├ 4630 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid /var/run/vdsm/respawn.pid /usr/share/vdsm/vdsm
├ 4633 /usr/bin/python /usr/share/vdsm/vdsm
├ 4653 /usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.py 3eaebf05-cc78-4d61-abfe-9e675fcaf1b8 4633
└ 4654 /usr/bin/python /usr/share/vdsm/supervdsmServer.py 3eaebf05-cc78-4d61-abfe-9e675fcaf1b8 4633
Jun 25 17:22:20 fedora17-ovirt.kvm.com systemd-vdsmd[4396]: Starting iscsid:
Jun 25 17:22:20 fedora17-ovirt.kvm.com systemd-vdsmd[4396]: Starting libvirtd (via systemctl): [ OK ]
Jun 25 17:22:21 fedora17-ovirt.kvm.com systemd-vdsmd[4396]: Starting up vdsm daemon:
Jun 25 17:22:21 fedora17-ovirt.kvm.com runuser[4627]: pam_unix(runuser:session): session opened for user vdsm by (uid=0)
Jun 25 17:22:21 fedora17-ovirt.kvm.com runuser[4627]: pam_unix(runuser:session): session closed for user vdsm
Jun 25 17:22:22 fedora17-ovirt.kvm.com systemd-vdsmd[4396]: [27B blob data]
Jun 25 17:22:22 fedora17-ovirt.kvm.com python[4633]: DIGEST-MD5 client step 2
Jun 25 17:22:22 fedora17-ovirt.kvm.com python[4633]: DIGEST-MD5 client step 2
Jun 25 17:22:22 fedora17-ovirt.kvm.com python[4633]: DIGEST-MD5 client step 3
Jun 25 17:22:25 fedora17-ovirt.kvm.com vdsm[4633]: vdsm vds ERROR Unable to load the rest server module. Please make sure it is installed.
25 Jun '12
Hi.
I'm running oVirt 3.1 beta with Gluster storage.
A virtual machine was created successfully, but it will not start.
From the logs:
vdsm.log:
Thread-1426::DEBUG::2012-06-22
09:37:27,151::task::978::TaskManager.Task::(_decref)
Task=`9a68c120-169f-4c0e-98e3-08e3bf5c66ab`::ref 0 aborting False
Thread-1427::DEBUG::2012-06-22
09:37:27,162::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]
Thread-1427::DEBUG::2012-06-22
09:37:27,163::task::588::TaskManager.Task::(_updateState)
Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::moving from state init -> state
preparing
Thread-1427::INFO::2012-06-22
09:37:27,163::logUtils::37::dispatcher::(wrapper) Run and protect:
getStoragePoolInfo(spUUID='b1c7875a-964d-4633-8ea4-2b191d68c105',
options=None)
Thread-1427::DEBUG::2012-06-22
09:37:27,163::resourceManager::175::ResourceManager.Request::(__init__)
ResName=`Storage.b1c7875a-964d-4633-8ea4-2b191d68c105`ReqID=`ca9b7715-1f0b-4
225-9717-d1179193c42e`::Request was made in
'/usr/share/vdsm/storage/resourceManager.py' line '485' at
'registerResource'
Thread-1427::DEBUG::2012-06-22
09:37:27,164::resourceManager::486::ResourceManager::(registerResource)
Trying to register resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105'
for lock type 'shared'
Thread-1427::DEBUG::2012-06-22
09:37:27,164::resourceManager::528::ResourceManager::(registerResource)
Resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' is free. Now locking
as 'shared' (1 active user)
Thread-1427::DEBUG::2012-06-22
09:37:27,164::resourceManager::212::ResourceManager.Request::(grant)
ResName=`Storage.b1c7875a-964d-4633-8ea4-2b191d68c105`ReqID=`ca9b7715-1f0b-4
225-9717-d1179193c42e`::Granted request
Thread-1427::DEBUG::2012-06-22
09:37:27,164::task::817::TaskManager.Task::(resourceAcquired)
Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::_resourcesAcquired:
Storage.b1c7875a-964d-4633-8ea4-2b191d68c105 (shared)
Thread-1427::DEBUG::2012-06-22
09:37:27,165::task::978::TaskManager.Task::(_decref)
Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::ref 1 aborting False
Thread-1427::INFO::2012-06-22
09:37:27,165::logUtils::39::dispatcher::(wrapper) Run and protect:
getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid':
'68aa0dc2-9cd1-4549-8008-30b1bae667db', 'name': 'gluster', 'version': '0',
'domains': '68aa0dc2-9cd1-4549-8008-30b1bae667db:Active', 'pool_status':
'connected', 'isoprefix': '', 'type': 'SHAREDFS', 'master_ver': 1, 'lver':
0}, 'dominfo': {'68aa0dc2-9cd1-4549-8008-30b1bae667db': {'status': 'Active',
'diskfree': '27505983488', 'alerts': [], 'disktotal': '53579874304'}}}
Thread-1427::DEBUG::2012-06-22
09:37:27,165::task::1172::TaskManager.Task::(prepare)
Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::finished: {'info': {'spm_id':
1, 'master_uuid': '68aa0dc2-9cd1-4549-8008-30b1bae667db', 'name': 'gluster',
'version': '0', 'domains': '68aa0dc2-9cd1-4549-8008-30b1bae667db:Active',
'pool_status': 'connected', 'isoprefix': '', 'type': 'SHAREDFS',
'master_ver': 1, 'lver': 0}, 'dominfo':
{'68aa0dc2-9cd1-4549-8008-30b1bae667db': {'status': 'Active', 'diskfree':
'27505983488', 'alerts': [], 'disktotal': '53579874304'}}}
Thread-1427::DEBUG::2012-06-22
09:37:27,166::task::588::TaskManager.Task::(_updateState)
Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::moving from state preparing ->
state finished
Thread-1427::DEBUG::2012-06-22
09:37:27,166::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105': < ResourceRef
'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105', isValid: 'True' obj:
'None'>}
Thread-1427::DEBUG::2012-06-22
09:37:27,166::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1427::DEBUG::2012-06-22
09:37:27,166::resourceManager::538::ResourceManager::(releaseResource)
Trying to release resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105'
Thread-1427::DEBUG::2012-06-22
09:37:27,166::resourceManager::553::ResourceManager::(releaseResource)
Released resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' (0 active
users)
Thread-1427::DEBUG::2012-06-22
09:37:27,167::resourceManager::558::ResourceManager::(releaseResource)
Resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' is free, finding out
if anyone is waiting for it.
Thread-1427::DEBUG::2012-06-22
09:37:27,167::resourceManager::565::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105',
Clearing records.
Thread-1427::DEBUG::2012-06-22
09:37:27,167::task::978::TaskManager.Task::(_decref)
Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::ref 0 aborting False
Thread-1428::DEBUG::2012-06-22
09:37:27,476::BindingXMLRPC::872::vds::(wrapper) client [10.1.20.2]::call
vmCreate with ({'custom': {}, 'keyboardLayout': 'en-us', 'kvmEnable':
'true', 'acpiEnable': 'true', 'emulatedMachine': 'pc', 'tabletEnable':
'true', 'vmId': '92de99e5-067a-421b-a4b1-2a2b60e8894a', 'devices':
[{'device': 'qxl', 'specParams': {'vram': '65536'}, 'type': 'video',
'deviceId': '9780f3aa-4c0e-44eb-bc94-7ebfb63fe2f3'}, {'index': '2', 'iface':
'ide', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId':
'59b3e477-8ba9-4a09-ac4a-4d0da91708ce', 'device': 'cdrom', 'path': '',
'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw',
'bootOrder': '1', 'volumeID': 'eb866d5a-1319-4e32-b9f3-4de3ad3272fb',
'imageID': '61180d3c-63ba-41ca-989a-8bd2acff4d7e', 'specParams': {},
'readonly': 'false', 'domainID': '68aa0dc2-9cd1-4549-8008-30b1bae667db',
'optional': 'false', 'deviceId': '61180d3c-63ba-41ca-989a-8bd2acff4d7e',
'poolID': 'b1c7875a-964d-4633-8ea4-2b191d68c105', 'device': 'disk',
'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}, {'nicModel':
'pv', 'macAddr': '00:1a:4a:01:14:00', 'network': 'ovirtmgmt', 'specParams':
{}, 'deviceId': 'c580b531-2178-4a38-bb1e-971bf300bf8a', 'device': 'bridge',
'type': 'interface'}, {'device': 'memballoon', 'specParams': {'model':
'virtio'}, 'type': 'balloon', 'deviceId':
'3df4e23d-85bd-41b3-a320-3a510a1c2e7f'}], 'smp': '1', 'vmType': 'kvm',
'timeOffset': '0', 'memSize': 512, 'spiceSslCipherSuite': 'DEFAULT',
'cpuType': 'Conroe', 'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay', 'smpCoresPerSocket':
'1', 'vmName': 'fgjh', 'display': 'vnc', 'transparentHugePages': 'true',
'nice': '0'},) {} flowID [60053096]
Thread-1428::INFO::2012-06-22 09:37:27,477::API::603::vds::(_getNetworkIp)
network None: using 0
Thread-1428::INFO::2012-06-22 09:37:27,477::API::229::vds::(create)
vmContainerLock acquired by vm 92de99e5-067a-421b-a4b1-2a2b60e8894a
Thread-1429::DEBUG::2012-06-22
09:37:27,479::vm::564::vm.Vm::(_startUnderlyingVm)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::Start
Thread-1428::DEBUG::2012-06-22 09:37:27,479::API::246::vds::(create) Total
desktops after creation of 92de99e5-067a-421b-a4b1-2a2b60e8894a is 1
Thread-1428::DEBUG::2012-06-22
09:37:27,480::BindingXMLRPC::879::vds::(wrapper) return vmCreate with
{'status': {'message': 'Done', 'code': 0}, 'vmList': {'status':
'WaitForLaunch', 'acpiEnable': 'true', 'emulatedMachine': 'pc',
'tabletEnable': 'true', 'pid': '0', 'timeOffset': '0', 'displayPort': '-1',
'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType':
'Conroe', 'custom': {}, 'clientIp': '', 'nicModel': 'rtl8139,pv',
'keyboardLayout': 'en-us', 'kvmEnable': 'true', 'vmId':
'92de99e5-067a-421b-a4b1-2a2b60e8894a', 'transparentHugePages': 'true',
'devices': [{'device': 'qxl', 'specParams': {'vram': '65536'}, 'type':
'video', 'deviceId': '9780f3aa-4c0e-44eb-bc94-7ebfb63fe2f3'}, {'index': '2',
'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId':
'59b3e477-8ba9-4a09-ac4a-4d0da91708ce', 'device': 'cdrom', 'path': '',
'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw',
'bootOrder': '1', 'volumeID': 'eb866d5a-1319-4e32-b9f3-4de3ad3272fb',
'imageID': '61180d3c-63ba-41ca-989a-8bd2acff4d7e', 'specParams': {},
'readonly': 'false', 'domainID': '68aa0dc2-9cd1-4549-8008-30b1bae667db',
'optional': 'false', 'deviceId': '61180d3c-63ba-41ca-989a-8bd2acff4d7e',
'poolID': 'b1c7875a-964d-4633-8ea4-2b191d68c105', 'device': 'disk',
'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}, {'nicModel':
'pv', 'macAddr': '00:1a:4a:01:14:00', 'network': 'ovirtmgmt', 'specParams':
{}, 'deviceId': 'c580b531-2178-4a38-bb1e-971bf300bf8a', 'device': 'bridge',
'type': 'interface'}, {'device': 'memballoon', 'specParams': {'model':
'virtio'}, 'type': 'balloon', 'deviceId':
'3df4e23d-85bd-41b3-a320-3a510a1c2e7f'}], 'smp': '1', 'vmType': 'kvm',
'memSize': 512, 'displayIp': '0', 'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay', 'smpCoresPerSocket':
'1', 'vmName': 'fgjh', 'display': 'vnc', 'nice': '0'}}
Thread-1429::DEBUG::2012-06-22
09:37:27,481::vm::568::vm.Vm::(_startUnderlyingVm)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::_ongoingCreations acquired
Thread-1429::INFO::2012-06-22 09:37:27,482::libvirtvm::1287::vm.Vm::(_run)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::VM wrapper has started
Thread-1429::DEBUG::2012-06-22
09:37:27,482::task::588::TaskManager.Task::(_updateState)
Task=`5922124f-6997-4a7f-a3a8-b4852afabe18`::moving from state init -> state
preparing
Thread-1429::INFO::2012-06-22
09:37:27,482::logUtils::37::dispatcher::(wrapper) Run and protect:
getVolumeSize(sdUUID='68aa0dc2-9cd1-4549-8008-30b1bae667db',
spUUID='b1c7875a-964d-4633-8ea4-2b191d68c105',
imgUUID='61180d3c-63ba-41ca-989a-8bd2acff4d7e',
volUUID='eb866d5a-1319-4e32-b9f3-4de3ad3272fb', options=None)
Thread-1429::DEBUG::2012-06-22
09:37:27,483::resourceManager::175::ResourceManager.Request::(__init__)
ResName=`Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db`ReqID=`2b25c825-b3b5-4
4f1-a41c-3e21dd2e716f`::Request was made in
'/usr/share/vdsm/storage/resourceManager.py' line '485' at
'registerResource'
Thread-1429::DEBUG::2012-06-22
09:37:27,483::resourceManager::486::ResourceManager::(registerResource)
Trying to register resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db'
for lock type 'shared'
Thread-1429::DEBUG::2012-06-22
09:37:27,483::resourceManager::528::ResourceManager::(registerResource)
Resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db' is free. Now locking
as 'shared' (1 active user)
Thread-1429::DEBUG::2012-06-22
09:37:27,483::resourceManager::212::ResourceManager.Request::(grant)
ResName=`Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db`ReqID=`2b25c825-b3b5-4
4f1-a41c-3e21dd2e716f`::Granted request
Thread-1429::DEBUG::2012-06-22
09:37:27,484::task::817::TaskManager.Task::(resourceAcquired)
Task=`5922124f-6997-4a7f-a3a8-b4852afabe18`::_resourcesAcquired:
Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db (shared)
Thread-1429::DEBUG::2012-06-22
09:37:27,484::task::978::TaskManager.Task::(_decref)
Task=`5922124f-6997-4a7f-a3a8-b4852afabe18`::ref 1 aborting False
Thread-1429::DEBUG::2012-06-22
09:37:27,485::fileVolume::535::Storage.Volume::(validateVolumePath) validate
path for eb866d5a-1319-4e32-b9f3-4de3ad3272fb
Thread-1429::DEBUG::2012-06-22
09:37:27,487::fileVolume::535::Storage.Volume::(validateVolumePath) validate
path for eb866d5a-1319-4e32-b9f3-4de3ad3272fb
Thread-1429::INFO::2012-06-22
09:37:27,488::logUtils::39::dispatcher::(wrapper) Run and protect:
getVolumeSize, Return response: {'truesize': '10737426432', 'apparentsize':
'10737418240'}
Thread-1429::DEBUG::2012-06-22
09:37:27,488::task::1172::TaskManager.Task::(prepare)
Task=`5922124f-6997-4a7f-a3a8-b4852afabe18`::finished: {'truesize':
'10737426432', 'apparentsize': '10737418240'}
Thread-1429::DEBUG::2012-06-22
09:37:27,489::task::588::TaskManager.Task::(_updateState)
Task=`5922124f-6997-4a7f-a3a8-b4852afabe18`::moving from state preparing ->
state finished
Thread-1429::DEBUG::2012-06-22
09:37:27,489::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db': < ResourceRef
'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db', isValid: 'True' obj:
'None'>}
Thread-1429::DEBUG::2012-06-22
09:37:27,489::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1429::DEBUG::2012-06-22
09:37:27,489::resourceManager::538::ResourceManager::(releaseResource)
Trying to release resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db'
Thread-1429::DEBUG::2012-06-22
09:37:27,489::resourceManager::553::ResourceManager::(releaseResource)
Released resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db' (0 active
users)
Thread-1429::DEBUG::2012-06-22
09:37:27,490::resourceManager::558::ResourceManager::(releaseResource)
Resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db' is free, finding out
if anyone is waiting for it.
Thread-1429::DEBUG::2012-06-22
09:37:27,490::resourceManager::565::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db',
Clearing records.
Thread-1429::DEBUG::2012-06-22
09:37:27,490::task::978::TaskManager.Task::(_decref)
Task=`5922124f-6997-4a7f-a3a8-b4852afabe18`::ref 0 aborting False
Thread-1429::INFO::2012-06-22
09:37:27,490::clientIF::279::vds::(prepareVolumePath) prepared volume path:
Thread-1429::DEBUG::2012-06-22
09:37:27,490::task::588::TaskManager.Task::(_updateState)
Task=`9eba14b9-a037-4f59-bc90-97b5ce032503`::moving from state init -> state
preparing
Thread-1429::INFO::2012-06-22
09:37:27,491::logUtils::37::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='68aa0dc2-9cd1-4549-8008-30b1bae667db',
spUUID='b1c7875a-964d-4633-8ea4-2b191d68c105',
imgUUID='61180d3c-63ba-41ca-989a-8bd2acff4d7e',
volUUID='eb866d5a-1319-4e32-b9f3-4de3ad3272fb')
Thread-1429::DEBUG::2012-06-22
09:37:27,491::resourceManager::175::ResourceManager.Request::(__init__)
ResName=`Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db`ReqID=`95022af4-4bcc-4
503-81c6-767215c0cca5`::Request was made in
'/usr/share/vdsm/storage/resourceManager.py' line '485' at
'registerResource'
Thread-1429::DEBUG::2012-06-22
09:37:27,491::resourceManager::486::ResourceManager::(registerResource)
Trying to register resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db'
for lock type 'shared'
Thread-1429::DEBUG::2012-06-22
09:37:27,491::resourceManager::528::ResourceManager::(registerResource)
Resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db' is free. Now locking
as 'shared' (1 active user)
Thread-1429::DEBUG::2012-06-22
09:37:27,492::resourceManager::212::ResourceManager.Request::(grant)
ResName=`Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db`ReqID=`95022af4-4bcc-4
503-81c6-767215c0cca5`::Granted request
Thread-1429::DEBUG::2012-06-22
09:37:27,492::task::817::TaskManager.Task::(resourceAcquired)
Task=`9eba14b9-a037-4f59-bc90-97b5ce032503`::_resourcesAcquired:
Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db (shared)
Thread-1429::DEBUG::2012-06-22
09:37:27,492::task::978::TaskManager.Task::(_decref)
Task=`9eba14b9-a037-4f59-bc90-97b5ce032503`::ref 1 aborting False
Thread-1429::DEBUG::2012-06-22
09:37:27,493::fileVolume::535::Storage.Volume::(validateVolumePath) validate
path for eb866d5a-1319-4e32-b9f3-4de3ad3272fb
Thread-1429::INFO::2012-06-22
09:37:27,496::image::357::Storage.Image::(getChain)
sdUUID=68aa0dc2-9cd1-4549-8008-30b1bae667db
imgUUID=61180d3c-63ba-41ca-989a-8bd2acff4d7e
chain=[<storage.fileVolume.FileVolume instance at 0x7fe5d4540ef0>]
Thread-1429::INFO::2012-06-22
09:37:27,497::logUtils::39::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'path':
'/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8
008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4
e32-b9f3-4de3ad3272fb', 'chain': [{'path':
'/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8
008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4
e32-b9f3-4de3ad3272fb', 'domainID': '68aa0dc2-9cd1-4549-8008-30b1bae667db',
'volumeID': 'eb866d5a-1319-4e32-b9f3-4de3ad3272fb', 'imageID':
'61180d3c-63ba-41ca-989a-8bd2acff4d7e'}]}
Thread-1429::DEBUG::2012-06-22
09:37:27,498::task::1172::TaskManager.Task::(prepare)
Task=`9eba14b9-a037-4f59-bc90-97b5ce032503`::finished: {'path':
'/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8
008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4
e32-b9f3-4de3ad3272fb', 'chain': [{'path':
'/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8
008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4
e32-b9f3-4de3ad3272fb', 'domainID': '68aa0dc2-9cd1-4549-8008-30b1bae667db',
'volumeID': 'eb866d5a-1319-4e32-b9f3-4de3ad3272fb', 'imageID':
'61180d3c-63ba-41ca-989a-8bd2acff4d7e'}]}
Thread-1429::DEBUG::2012-06-22
09:37:27,498::task::588::TaskManager.Task::(_updateState)
Task=`9eba14b9-a037-4f59-bc90-97b5ce032503`::moving from state preparing ->
state finished
Thread-1429::DEBUG::2012-06-22
09:37:27,498::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db': < ResourceRef
'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db', isValid: 'True' obj:
'None'>}
Thread-1429::DEBUG::2012-06-22
09:37:27,498::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1429::DEBUG::2012-06-22
09:37:27,499::resourceManager::538::ResourceManager::(releaseResource)
Trying to release resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db'
Thread-1429::DEBUG::2012-06-22
09:37:27,499::resourceManager::553::ResourceManager::(releaseResource)
Released resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db' (0 active
users)
Thread-1429::DEBUG::2012-06-22
09:37:27,499::resourceManager::558::ResourceManager::(releaseResource)
Resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db' is free, finding out
if anyone is waiting for it.
Thread-1429::DEBUG::2012-06-22
09:37:27,499::resourceManager::565::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db',
Clearing records.
Thread-1429::DEBUG::2012-06-22
09:37:27,500::task::978::TaskManager.Task::(_decref)
Task=`9eba14b9-a037-4f59-bc90-97b5ce032503`::ref 0 aborting False
Thread-1429::INFO::2012-06-22
09:37:27,500::clientIF::279::vds::(prepareVolumePath) prepared volume path:
/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-80
08-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e
32-b9f3-4de3ad3272fb
Thread-1429::DEBUG::2012-06-22 09:37:27,507::libvirtvm::1340::vm.Vm::(_run)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::<?xml version="1.0"
encoding="utf-8"?>
<domain type="kvm">
<name>fgjh</name>
<uuid>92de99e5-067a-421b-a4b1-2a2b60e8894a</uuid>
<memory>524288</memory>
<currentMemory>524288</currentMemory>
<vcpu>1</vcpu>
<devices>
<channel type="unix">
<target name="com.redhat.rhevm.vdsm" type="virtio"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/fgjh.com.redhat.rhevm.vdsm"/>
</channel>
<input bus="usb" type="tablet"/>
<graphics autoport="yes" keymap="en-us" listen="0"
passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"/>
<console type="pty">
<target port="0" type="virtio"/>
</console>
<video>
<model heads="1" type="qxl" vram="65536"/>
</video>
<interface type="bridge">
<mac address="00:1a:4a:01:14:00"/>
<model type="virtio"/>
<source bridge="ovirtmgmt"/>
</interface>
<memballoon model="virtio"/>
<disk device="cdrom" snapshot="no" type="file">
<source file=""/>
<target bus="ide" dev="hdc"/>
<readonly/>
<serial></serial>
</disk>
<disk device="disk" snapshot="no" type="file">
<source
file="/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4
549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1
319-4e32-b9f3-4de3ad3272fb"/>
<target bus="virtio" dev="vda"/>
<serial>61180d3c-63ba-41ca-989a-8bd2acff4d7e</serial>
<boot order="1"/>
<driver cache="none" error_policy="stop"
io="threads" name="qemu" type="raw"/>
</disk>
</devices>
<os>
<type arch="x86_64" machine="pc">hvm</type>
<smbios mode="sysinfo"/>
</os>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">Red Hat</entry>
<entry name="product">RHEV Hypervisor</entry>
<entry name="version">6.2-1.1</entry>
<entry
name="serial">068FD200-06AF-7318-06AF-73180A8F5201_00:1c:c4:74:94:f0</entry>
<entry
name="uuid">92de99e5-067a-421b-a4b1-2a2b60e8894a</entry>
</system>
</sysinfo>
<clock adjustment="0" offset="variable">
<timer name="rtc" tickpolicy="catchup"/>
</clock>
<features>
<acpi/>
</features>
<cpu match="exact">
<model>Conroe</model>
<topology cores="1" sockets="1" threads="1"/>
</cpu>
</domain>
Thread-1429::DEBUG::2012-06-22
09:37:28,084::vm::580::vm.Vm::(_startUnderlyingVm)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::_ongoingCreations released
Thread-1429::ERROR::2012-06-22
09:37:28,084::vm::604::vm.Vm::(_startUnderlyingVm)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/libvirtvm.py", line 1366, in _run
self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line
82, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2087, in
createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: internal error Process exited while reading console log
output: char device redirected to /dev/pts/1
qemu-kvm: -drive
file=/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-45
49-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-13
19-4e32-b9f3-4de3ad3272fb,if=none,id=drive-virtio-disk0,format=raw,serial=61
180d3c-63ba-41ca-989a-8bd2acff4d7e,cache=none,werror=stop,rerror=stop,aio=th
reads: could not open disk image
/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-80
08-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e
32-b9f3-4de3ad3272fb: Permission denied
Thread-1429::DEBUG::2012-06-22 09:37:28,087::vm::920::vm.Vm::(setDownStatus)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::Changed state to Down: internal
error Process exited while reading console log output: char device
redirected to /dev/pts/1
qemu-kvm: -drive
file=/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-45
49-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-13
19-4e32-b9f3-4de3ad3272fb,if=none,id=drive-virtio-disk0,format=raw,serial=61
180d3c-63ba-41ca-989a-8bd2acff4d7e,cache=none,werror=stop,rerror=stop,aio=th
reads: could not open disk image
/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-80
08-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e
32-b9f3-4de3ad3272fb: Permission denied
Thread-1432::DEBUG::2012-06-22
09:37:28,354::BindingXMLRPC::872::vds::(wrapper) client [10.1.20.2]::call
vmGetStats with ('92de99e5-067a-421b-a4b1-2a2b60e8894a',) {}
Thread-1432::DEBUG::2012-06-22
09:37:28,354::BindingXMLRPC::879::vds::(wrapper) return vmGetStats with
{'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Down',
'hash': '0', 'exitMessage': 'internal error Process exited while reading
console log output: char device redirected to /dev/pts/1\nqemu-kvm: -drive
file=/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-45
49-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-13
19-4e32-b9f3-4de3ad3272fb,if=none,id=drive-virtio-disk0,format=raw,serial=61
180d3c-63ba-41ca-989a-8bd2acff4d7e,cache=none,werror=stop,rerror=stop,aio=th
reads: could not open disk image
/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-80
08-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e
32-b9f3-4de3ad3272fb: Permission denied\n', 'vmId':
'92de99e5-067a-421b-a4b1-2a2b60e8894a', 'timeOffset': '0', 'exitCode': 1}]}
Thread-1433::DEBUG::2012-06-22
09:37:28,366::BindingXMLRPC::872::vds::(wrapper) client [10.1.20.2]::call
vmDestroy with ('92de99e5-067a-421b-a4b1-2a2b60e8894a',) {}
Thread-1433::INFO::2012-06-22 09:37:28,366::API::319::vds::(destroy)
vmContainerLock acquired by vm 92de99e5-067a-421b-a4b1-2a2b60e8894a
Thread-1433::DEBUG::2012-06-22
09:37:28,366::libvirtvm::2088::vm.Vm::(destroy)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::destroy Called
Thread-1433::INFO::2012-06-22
09:37:28,366::libvirtvm::2042::vm.Vm::(releaseVm)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::Release VM resources
Thread-1433::WARNING::2012-06-22
09:37:28,366::vm::328::vm.Vm::(_set_lastStatus)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::trying to set state to Powering
down when already Down
Thread-1433::DEBUG::2012-06-22
09:37:28,367::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
/sbin/service ksmtuned retune' (cwd None)
Thread-1433::DEBUG::2012-06-22
09:37:28,413::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> =
''; <rc> = 0
Thread-1433::DEBUG::2012-06-22
09:37:28,414::task::588::TaskManager.Task::(_updateState)
Task=`f042d1c3-5c18-4cb2-89d9-87d64a560922`::moving from state init -> state
preparing
Thread-1433::INFO::2012-06-22
09:37:28,415::logUtils::37::dispatcher::(wrapper) Run and protect:
inappropriateDevices(thiefId='92de99e5-067a-421b-a4b1-2a2b60e8894a')
Thread-1433::INFO::2012-06-22
09:37:28,418::logUtils::39::dispatcher::(wrapper) Run and protect:
inappropriateDevices, Return response: None
Thread-1433::DEBUG::2012-06-22
09:37:28,418::task::1172::TaskManager.Task::(prepare)
Task=`f042d1c3-5c18-4cb2-89d9-87d64a560922`::finished: None
Thread-1433::DEBUG::2012-06-22
09:37:28,418::task::588::TaskManager.Task::(_updateState)
Task=`f042d1c3-5c18-4cb2-89d9-87d64a560922`::moving from state preparing ->
state finished
Thread-1433::DEBUG::2012-06-22
09:37:28,419::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-1433::DEBUG::2012-06-22
09:37:28,419::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1433::DEBUG::2012-06-22
09:37:28,419::task::978::TaskManager.Task::(_decref)
Task=`f042d1c3-5c18-4cb2-89d9-87d64a560922`::ref 0 aborting False
Thread-1433::DEBUG::2012-06-22
09:37:28,419::libvirtvm::2083::vm.Vm::(deleteVm)
vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::Total desktops after destroy of
92de99e5-067a-421b-a4b1-2a2b60e8894a is 0
Thread-1433::DEBUG::2012-06-22
09:37:28,420::BindingXMLRPC::879::vds::(wrapper) return vmDestroy with
{'status': {'message': 'Machine destroyed', 'code': 0}}
Thread-1434::DEBUG::2012-06-22
09:37:30,459::task::588::TaskManager.Task::(_updateState)
Task=`a479829e-33ba-4c9b-987a-bc61d8bf11d6`::moving from state init -> state
preparing
Thread-1434::INFO::2012-06-22
09:37:30,459::logUtils::37::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-1434::INFO::2012-06-22
09:37:30,459::logUtils::39::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'68aa0dc2-9cd1-4549-8008-30b1bae667db':
{'delay': '0.0014181137085', 'lastCheck': 1340372243.5057499, 'code': 0,
'valid': True}}
Thread-1434::DEBUG::2012-06-22
09:37:30,460::task::1172::TaskManager.Task::(prepare)
Task=`a479829e-33ba-4c9b-987a-bc61d8bf11d6`::finished:
{'68aa0dc2-9cd1-4549-8008-30b1bae667db': {'delay': '0.0014181137085',
'lastCheck': 1340372243.5057499, 'code': 0, 'valid': True}}
Thread-1434::DEBUG::2012-06-22
09:37:30,460::task::588::TaskManager.Task::(_updateState)
Task=`a479829e-33ba-4c9b-987a-bc61d8bf11d6`::moving from state preparing ->
state finished
Thread-1434::DEBUG::2012-06-22
09:37:30,460::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-1434::DEBUG::2012-06-22
09:37:30,460::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1434::DEBUG::2012-06-22
09:37:30,460::task::978::TaskManager.Task::(_decref)
Task=`a479829e-33ba-4c9b-987a-bc61d8bf11d6`::ref 0 aborting False
^C
Error:
libvirtError: internal error Process exited while reading console log
output: char device redirected to /dev/pts/1
qemu-kvm: -drive
file=/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-45
49-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-13
19-4e32-b9f3-4de3ad3272fb,if=none,id=drive-virtio-disk0,format=raw,serial=61
180d3c-63ba-41ca-989a-8bd2acff4d7e,cache=none,werror=stop,rerror=stop,aio=th
reads: could not open disk image
/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-80
08-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e
32-b9f3-4de3ad3272fb: Permission denied
[root@noc-3-synt mnt]# ls -lh
/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-80
08-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e
32-b9f3-4de3ad3272fb
-rw-rw----. 1 vdsm kvm 10G Jun 22 09:28
/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-80
08-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e
32-b9f3-4de3ad3272fb
[root@noc-3-synt mnt]# ps -aux | grep /usr/share/vdsm/vdsm
Warning: bad syntax, perhaps a bogus '-'? See
/usr/share/doc/procps-3.2.8/FAQ
root 2761 0.0 0.0 103280 804 pts/0 S+ 09:51 0:00 grep
/usr/share/vdsm/vdsm
vdsm 4480 0.0 0.0 9272 616 ? S< 09:07 0:00 /bin/bash
-e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid
/var/run/vdsm/respawn.pid /usr/share/vdsm/vdsm
vdsm 4483 0.6 0.2 1411684 34800 ? S<l 09:07 0:17
/usr/bin/python /usr/share/vdsm/vdsm
vdsm 5265 0.0 0.1 1387096 26880 ? S< 09:17 0:00
/usr/bin/python /usr/share/vdsm/vdsm
vdsm 5266 0.0 0.1 1387096 26660 ? S< 09:17 0:00
/usr/bin/python /usr/share/vdsm/vdsm
vdsm 5267 0.0 0.1 1387096 26660 ? S< 09:17 0:00
/usr/bin/python /usr/share/vdsm/vdsm
vdsm 5269 0.0 0.1 1387096 26584 ? S< 09:17 0:00
/usr/bin/python /usr/share/vdsm/vdsm
vdsm 5271 0.0 0.1 1387096 26584 ? S< 09:17 0:00
/usr/bin/python /usr/share/vdsm/vdsm
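
One detail worth noting in the `ls -lh` output above: the image is `-rw-rw---- vdsm:kvm`, so any process whose effective uid/gid falls outside vdsm/kvm gets exactly this "Permission denied" (EACCES) on open. That is the usual suspect when qemu runs as a different user (see the `user`/`group` settings in /etc/libvirt/qemu.conf) or when a FUSE-backed Gluster mount does not grant access to other users. A minimal, self-contained sketch of the mode in question (the temp file and `stat` call are illustrative, not from the original host):

```shell
# Recreate the mode shown in the ls -lh output (-rw-rw----) on a
# throwaway file and print it back; a process running outside the
# owning user/group cannot open such a file at all.
f=$(mktemp)
chmod 660 "$f"
stat -c '%A' "$f"   # prints: -rw-rw----
rm -f "$f"
```

On the real host, the analogous check would be attempting to read the volume path from the log while impersonating the qemu user configured in qemu.conf.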
'/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-454=
9-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-=
1319-4e32-b9f3-4de3ad3272fb', 'chain': [{'path': =
'/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-454=
9-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-=
1319-4e32-b9f3-4de3ad3272fb', 'domainID': =
'68aa0dc2-9cd1-4549-8008-30b1bae667db', 'volumeID': =
'eb866d5a-1319-4e32-b9f3-4de3ad3272fb', 'imageID': =
'61180d3c-63ba-41ca-989a-8bd2acff4d7e'}]}<o:p></o:p></span></p><p =
class=3DMsoNormal><span lang=3DEN-US>Thread-1429::DEBUG::2012-06-22 =
09:37:27,498::task::588::TaskManager.Task::(_updateState) =
Task=3D`9eba14b9-a037-4f59-bc90-97b5ce032503`::moving from state =
preparing -> state finished<o:p></o:p></span></p><p =
class=3DMsoNormal><span lang=3DEN-US>Thread-1429::DEBUG::2012-06-22 =
09:37:27,498::resourceManager::809::ResourceManager.Owner::(releaseAll) =
Owner.releaseAll requests {} resources =
{'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db': < ResourceRef =
'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db', isValid: 'True' obj: =
'None'>}<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>Thread-1429::DEBUG::2012-06-22 =
09:37:27,498::resourceManager::844::ResourceManager.Owner::(cancelAll) =
Owner.cancelAll requests {}<o:p></o:p></span></p><p =
class=3DMsoNormal><span lang=3DEN-US>Thread-1429::DEBUG::2012-06-22 =
09:37:27,499::resourceManager::538::ResourceManager::(releaseResource) =
Trying to release resource =
'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db'<o:p></o:p></span></p><p =
class=3DMsoNormal><span lang=3DEN-US>Thread-1429::DEBUG::2012-06-22 =
09:37:27,499::resourceManager::553::ResourceManager::(releaseResource) =
Released resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db' (0 =
active users)<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>Thread-1429::DEBUG::2012-06-22 =
09:37:27,499::resourceManager::558::ResourceManager::(releaseResource) =
Resource 'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db' is free, finding =
out if anyone is waiting for it.<o:p></o:p></span></p><p =
class=3DMsoNormal><span lang=3DEN-US>Thread-1429::DEBUG::2012-06-22 =
09:37:27,499::resourceManager::565::ResourceManager::(releaseResource) =
No one is waiting for resource =
'Storage.68aa0dc2-9cd1-4549-8008-30b1bae667db', Clearing =
records.<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>Thread-1429::DEBUG::2012-06-22 =
09:37:27,500::task::978::TaskManager.Task::(_decref) =
Task=3D`9eba14b9-a037-4f59-bc90-97b5ce032503`::ref 0 aborting =
False<o:p></o:p></span></p><p class=3DMsoNormal><span =
lang=3DEN-US>Thread-1429::INFO::2012-06-22 =
09:37:27,500::clientIF::279::vds::(prepareVolumePath) prepared volume =
path: =
/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549=
-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1=
319-4e32-b9f3-4de3ad3272fb<o:p></o:p></span></p><p =
Thread-1429::DEBUG::2012-06-22 09:37:27,507::libvirtvm::1340::vm.Vm::(_run) vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
        <name>fgjh</name>
        <uuid>92de99e5-067a-421b-a4b1-2a2b60e8894a</uuid>
        <memory>524288</memory>
        <currentMemory>524288</currentMemory>
        <vcpu>1</vcpu>
        <devices>
                <channel type="unix">
                        <target name="com.redhat.rhevm.vdsm" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/fgjh.com.redhat.rhevm.vdsm"/>
                </channel>
                <input bus="usb" type="tablet"/>
                <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc"/>
                <console type="pty">
                        <target port="0" type="virtio"/>
                </console>
                <video>
                        <model heads="1" type="qxl" vram="65536"/>
                </video>
                <interface type="bridge">
                        <mac address="00:1a:4a:01:14:00"/>
                        <model type="virtio"/>
                        <source bridge="ovirtmgmt"/>
                </interface>
                <memballoon model="virtio"/>
                <disk device="cdrom" snapshot="no" type="file">
                        <source file=""/>
                        <target bus="ide" dev="hdc"/>
                        <readonly/>
                        <serial></serial>
                </disk>
                <disk device="disk" snapshot="no" type="file">
                        <source file="/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb"/>
                        <target bus="virtio" dev="vda"/>
                        <serial>61180d3c-63ba-41ca-989a-8bd2acff4d7e</serial>
                        <boot order="1"/>
                        <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
                </disk>
        </devices>
        <os>
                <type arch="x86_64" machine="pc">hvm</type>
                <smbios mode="sysinfo"/>
        </os>
        <sysinfo type="smbios">
                <system>
                        <entry name="manufacturer">Red Hat</entry>
                        <entry name="product">RHEV Hypervisor</entry>
                        <entry name="version">6.2-1.1</entry>
                        <entry name="serial">068FD200-06AF-7318-06AF-73180A8F5201_00:1c:c4:74:94:f0</entry>
                        <entry name="uuid">92de99e5-067a-421b-a4b1-2a2b60e8894a</entry>
                </system>
        </sysinfo>
        <clock adjustment="0" offset="variable">
                <timer name="rtc" tickpolicy="catchup"/>
        </clock>
        <features>
                <acpi/>
        </features>
        <cpu match="exact">
                <model>Conroe</model>
                <topology cores="1" sockets="1" threads="1"/>
        </cpu>
</domain>

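As an aside, the disk path that later fails to open appears verbatim in the domain XML that vdsm logged above. Such paths can be pulled out of a domain definition with the Python standard library alone; this is only an illustrative sketch (the short path below is a placeholder, not the real volume path):

```python
import xml.etree.ElementTree as ET

# Placeholder domain XML in the same shape as the one vdsm logged;
# the path here is hypothetical, not the real storage-domain path.
domain_xml = """
<domain type="kvm">
  <devices>
    <disk device="disk" snapshot="no" type="file">
      <source file="/rhev/data-center/pool/domain/images/img/vol"/>
      <target bus="virtio" dev="vda"/>
    </disk>
  </devices>
</domain>
"""

def disk_sources(xml_text):
    """Return the file paths of all file-backed disks in a libvirt
    domain XML; disks with an empty source (e.g. an empty cdrom)
    are skipped because get("file") returns a falsy empty string."""
    root = ET.fromstring(xml_text)
    return [s.get("file") for s in root.findall("./devices/disk/source")
            if s.get("file")]
```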
Thread-1429::DEBUG::2012-06-22 09:37:28,084::vm::580::vm.Vm::(_startUnderlyingVm) vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::_ongoingCreations released
Thread-1429::ERROR::2012-06-22 09:37:28,084::vm::604::vm.Vm::(_startUnderlyingVm) vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1366, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2087, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
qemu-kvm: -drive file=/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb,if=none,id=drive-virtio-disk0,format=raw,serial=61180d3c-63ba-41ca-989a-8bd2acff4d7e,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image /rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb: Permission denied

Thread-1429::DEBUG::2012-06-22 09:37:28,087::vm::920::vm.Vm::(setDownStatus) vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::Changed state to Down: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
qemu-kvm: -drive file=/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb,if=none,id=drive-virtio-disk0,format=raw,serial=61180d3c-63ba-41ca-989a-8bd2acff4d7e,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image /rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb: Permission denied

Thread-1432::DEBUG::2012-06-22 09:37:28,354::BindingXMLRPC::872::vds::(wrapper) client [10.1.20.2]::call vmGetStats with ('92de99e5-067a-421b-a4b1-2a2b60e8894a',) {}
Thread-1432::DEBUG::2012-06-22 09:37:28,354::BindingXMLRPC::879::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Down', 'hash': '0', 'exitMessage': 'internal error Process exited while reading console log output: char device redirected to /dev/pts/1\nqemu-kvm: -drive file=/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb,if=none,id=drive-virtio-disk0,format=raw,serial=61180d3c-63ba-41ca-989a-8bd2acff4d7e,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image /rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb: Permission denied\n', 'vmId': '92de99e5-067a-421b-a4b1-2a2b60e8894a', 'timeOffset': '0', 'exitCode': 1}]}
Thread-1433::DEBUG::2012-06-22 09:37:28,366::BindingXMLRPC::872::vds::(wrapper) client [10.1.20.2]::call vmDestroy with ('92de99e5-067a-421b-a4b1-2a2b60e8894a',) {}
Thread-1433::INFO::2012-06-22 09:37:28,366::API::319::vds::(destroy) vmContainerLock acquired by vm 92de99e5-067a-421b-a4b1-2a2b60e8894a
Thread-1433::DEBUG::2012-06-22 09:37:28,366::libvirtvm::2088::vm.Vm::(destroy) vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::destroy Called
Thread-1433::INFO::2012-06-22 09:37:28,366::libvirtvm::2042::vm.Vm::(releaseVm) vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::Release VM resources
Thread-1433::WARNING::2012-06-22 09:37:28,366::vm::328::vm.Vm::(_set_lastStatus) vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::trying to set state to Powering down when already Down
Thread-1433::DEBUG::2012-06-22 09:37:28,367::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/service ksmtuned retune' (cwd None)
Thread-1433::DEBUG::2012-06-22 09:37:28,413::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
Thread-1433::DEBUG::2012-06-22 09:37:28,414::task::588::TaskManager.Task::(_updateState) Task=`f042d1c3-5c18-4cb2-89d9-87d64a560922`::moving from state init -> state preparing
Thread-1433::INFO::2012-06-22 09:37:28,415::logUtils::37::dispatcher::(wrapper) Run and protect: inappropriateDevices(thiefId='92de99e5-067a-421b-a4b1-2a2b60e8894a')
Thread-1433::INFO::2012-06-22 09:37:28,418::logUtils::39::dispatcher::(wrapper) Run and protect: inappropriateDevices, Return response: None
Thread-1433::DEBUG::2012-06-22 09:37:28,418::task::1172::TaskManager.Task::(prepare) Task=`f042d1c3-5c18-4cb2-89d9-87d64a560922`::finished: None
Thread-1433::DEBUG::2012-06-22 09:37:28,418::task::588::TaskManager.Task::(_updateState) Task=`f042d1c3-5c18-4cb2-89d9-87d64a560922`::moving from state preparing -> state finished
Thread-1433::DEBUG::2012-06-22 09:37:28,419::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1433::DEBUG::2012-06-22 09:37:28,419::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1433::DEBUG::2012-06-22 09:37:28,419::task::978::TaskManager.Task::(_decref) Task=`f042d1c3-5c18-4cb2-89d9-87d64a560922`::ref 0 aborting False
Thread-1433::DEBUG::2012-06-22 09:37:28,419::libvirtvm::2083::vm.Vm::(deleteVm) vmId=`92de99e5-067a-421b-a4b1-2a2b60e8894a`::Total desktops after destroy of 92de99e5-067a-421b-a4b1-2a2b60e8894a is 0
Thread-1433::DEBUG::2012-06-22 09:37:28,420::BindingXMLRPC::879::vds::(wrapper) return vmDestroy with {'status': {'message': 'Machine destroyed', 'code': 0}}
Thread-1434::DEBUG::2012-06-22 09:37:30,459::task::588::TaskManager.Task::(_updateState) Task=`a479829e-33ba-4c9b-987a-bc61d8bf11d6`::moving from state init -> state preparing
Thread-1434::INFO::2012-06-22 09:37:30,459::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1434::INFO::2012-06-22 09:37:30,459::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'68aa0dc2-9cd1-4549-8008-30b1bae667db': {'delay': '0.0014181137085', 'lastCheck': 1340372243.5057499, 'code': 0, 'valid': True}}
Thread-1434::DEBUG::2012-06-22 09:37:30,460::task::1172::TaskManager.Task::(prepare) Task=`a479829e-33ba-4c9b-987a-bc61d8bf11d6`::finished: {'68aa0dc2-9cd1-4549-8008-30b1bae667db': {'delay': '0.0014181137085', 'lastCheck': 1340372243.5057499, 'code': 0, 'valid': True}}
Thread-1434::DEBUG::2012-06-22 09:37:30,460::task::588::TaskManager.Task::(_updateState) Task=`a479829e-33ba-4c9b-987a-bc61d8bf11d6`::moving from state preparing -> state finished
Thread-1434::DEBUG::2012-06-22 09:37:30,460::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1434::DEBUG::2012-06-22 09:37:30,460::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1434::DEBUG::2012-06-22 09:37:30,460::task::978::TaskManager.Task::(_decref) Task=`a479829e-33ba-4c9b-987a-bc61d8bf11d6`::ref 0 aborting False
^C

Error:
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
qemu-kvm: -drive file=/rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb,if=none,id=drive-virtio-disk0,format=raw,serial=61180d3c-63ba-41ca-989a-8bd2acff4d7e,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image /rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb: Permission denied

[root@noc-3-synt mnt]# ls -lh /rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb
-rw-rw----. 1 vdsm kvm 10G Jun 22 09:28 /rhev/data-center/b1c7875a-964d-4633-8ea4-2b191d68c105/68aa0dc2-9cd1-4549-8008-30b1bae667db/images/61180d3c-63ba-41ca-989a-8bd2acff4d7e/eb866d5a-1319-4e32-b9f3-4de3ad3272fb

[root@noc-3-synt mnt]# ps -aux | grep /usr/share/vdsm/vdsm
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ
root      2761  0.0  0.0 103280   804 pts/0    S+   09:51   0:00 grep /usr/share/vdsm/vdsm
vdsm      4480  0.0  0.0   9272   616 ?        S<   09:07   0:00 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid /var/run/vdsm/respawn.pid /usr/share/vdsm/vdsm
vdsm      4483  0.6  0.2 1411684 34800 ?       S<l  09:07   0:17 /usr/bin/python /usr/share/vdsm/vdsm
vdsm      5265  0.0  0.1 1387096 26880 ?       S<   09:17   0:00 /usr/bin/python /usr/share/vdsm/vdsm
vdsm      5266  0.0  0.1 1387096 26660 ?       S<   09:17   0:00 /usr/bin/python /usr/share/vdsm/vdsm
vdsm      5267  0.0  0.1 1387096 26660 ?       S<   09:17   0:00 /usr/bin/python /usr/share/vdsm/vdsm
vdsm      5269  0.0  0.1 1387096 26584 ?       S<   09:17   0:00 /usr/bin/python /usr/share/vdsm/vdsm
vdsm      5271  0.0  0.1 1387096 26584 ?       S<   09:17   0:00 /usr/bin/python /usr/share/vdsm/vdsm
….