[Users] very odd permission problem
by Alessandro Bianchi
Hi all,

I'm running oVirt 3.2 on several Fedora 18 nodes.

One of them uses local storage and runs 4 VMs.

Today the UPS failed, and the host was rebooted after the UPS was replaced.

After that, none of the VMs would start.

I tried putting the host into maintenance and reinstalling it, but that didn't help.

Digging into the logs I found the following errors.

The first was of this kind (for every VM):
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2630, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: errore interno process exited while connecting to monitor:
((null):5034): Spice-Warning **: reds.c:3247:reds_init_ssl: Could not use private key file
qemu-kvm: failed to initialize spice server
Thread-564::DEBUG::2013-09-06
11:31:32,814::vm::1065::vm.Vm::(setDownStatus)
vmId=`49d84915-490b-497d-a3f8-c7dac7485281`::Changed state to Down:
errore interno process exited while connecting to monitor:
((null):5034): Spice-Warning **: reds.c:3247:reds_init_ssl: Could not
use private key file
qemu-kvm: failed to initialize spice server
The private key had mode 440 and was owned by the vdsm user and kvm group.
I had to change it to 444 to allow everyone to read it.
After that, every VM failed with the following error:
could not open disk image
/rhev/data-center/3935800a-abe4-406d-84a1-4c3c0b915cce/6818de31-5cda-41d0-a41a-681230a409ba/images/54144c03-5057-462e-8275-6ab386ae8c5a/01298998-32d5-44c2-b5d1-91be1316ed19:
Permission denied
The disks were owned by vdsm:kvm with mode 660.
I had to relax this to 666 to get the VMs to start.
Has anyone faced this kind of problem before?
Any hint about what may have caused this odd problem?
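In case it helps anyone chasing the same thing, here is a small, generic helper to audit the modes the message talks about (440 vs 444 on the key, 660 vs 666 on the images). Nothing below is oVirt-specific, and the paths in the usage comment are illustrative assumptions, not necessarily where your install keeps them:

```python
import os
import stat

def mode_octal(path):
    """Permission bits of *path* as an octal string, e.g. '0440'."""
    return format(stat.S_IMODE(os.stat(path).st_mode), "04o")

def readable_by(path, *, owner=False, group=False, other=False):
    """True if every requested class (owner/group/other) has read access."""
    mode = os.stat(path).st_mode
    ok = True
    if owner:
        ok = ok and bool(mode & stat.S_IRUSR)
    if group:
        ok = ok and bool(mode & stat.S_IRGRP)
    if other:
        ok = ok and bool(mode & stat.S_IROTH)
    return ok

# Illustrative usage on an affected host (paths are assumptions):
#   mode_octal("/etc/pki/vdsm/keys/vdsmkey.pem")        # expect '0440'
#   readable_by("/rhev/data-center/<...>/images/<id>", group=True)
```

Since 440/660 with vdsm:kvm ownership is normally sufficient, the more telling check is probably comparing `os.stat(path).st_uid`/`st_gid` against the uid/gid the qemu process actually runs as, rather than relaxing the modes to world-readable.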
Thank you
Best regards
--
SkyNet SRL
http://www.skynet.it
[Users] oVirt API Code 500 with Foreman
by Andrew Lau
Hi,
The Foreman guys say this is probably an oVirt-side issue.

I successfully hooked up my Foreman server to the oVirt datacenter using the
compute resources section. It detected the available datacenters and logged
in fine. Logs showed code 200, and I can view the available VMs and power
them on and off from the Foreman UI.
But when I go to hosts->New Host and select oVirt I get the error:
Error loading virtual machine information: Internal Server Error
Logs are showing:
Operation FAILED: StatementCallback; bad SQL grammar [select * from (select
* from vds_groups_view where ( vds_group_id in (select
vds_groups_storage_domain.vds_group_id from vds_groups_storage_domain left
outer join storage_pool_with_storage_domain on
vds_groups_storage_domain.storage_pool_id=storage_pool_with_storage_domain.id
where ( storage_pool_with_storage_domain.name like '%dc_01%' or
storage_pool_with_storage_domain.description like '%dc_01%' or
storage_pool_with_storage_domain.comment like '%dc_01%' ) )) order by name
asc ) as t1 offset (1 -1) limit 100]; nested exception is
org.postgresql.util.PSQLException: ERROR: column
storage_pool_with_storage_domain.comment does not exist
Position: 421
Rendered common/500.html.erb (5.2ms)
Completed 500 Internal Server Error in 150ms (Views: 6.0ms | ActiveRecord:
0.3ms)
CentOS 6.4 - Foreman 1.2
CentOS 6.4 - oVirt 3.3 Nightly
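For what it's worth, the interesting part of that trace is "column storage_pool_with_storage_domain.comment does not exist": the engine is generating a search query against a view column the database doesn't have, which usually points at an engine/schema version mismatch rather than anything on the Foreman side. A tiny stand-alone repro of that failure shape (sqlite is used purely for illustration; only the column name `comment` comes from the log):

```python
import sqlite3

def search_by_comment(conn, pattern):
    """Mirror of the failing engine search: filter on a 'comment' column."""
    return conn.execute(
        "SELECT * FROM storage_pool_view WHERE comment LIKE ?", (pattern,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
# Hypothetical stand-in for the real view -- deliberately missing 'comment'.
conn.execute(
    "CREATE TABLE storage_pool_view (id INTEGER, name TEXT, description TEXT)"
)
try:
    search_by_comment(conn, "%dc_01%")
except sqlite3.OperationalError as exc:
    print(exc)  # no such column: comment
```

The fix, correspondingly, would be on the side that owns the schema (upgrading the engine database), not in Foreman's query.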
Any suggestions?
Thanks,
Andrew.
[Users] oVirt 3.3 ovirt-websocket-proxy
by Jakub Bittner
Hello,
if I install the websocket proxy and configure it for spice-html5 via
ovirt-setup, it does not add an iptables rule opening port 6100 to
/etc/sysconfig/iptables, even though I set oVirt to manage iptables. I use
CentOS 6.4 and oVirt 3.3 RC.
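A quick way to verify the symptom from a client machine - whether anything answers on the proxy port at all - is a plain TCP connect. Port 6100 is taken from the message above; the hostname in the usage comment is a placeholder:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Attempt a TCP connect; True if something is listening and reachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative usage (hostname is hypothetical):
#   port_open("engine.example.com", 6100)
```

A False result would be consistent with either the missing iptables rule or the proxy not running; checking from the engine host itself distinguishes the two.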
Thank you.
Re: [Users] Short delay in 3.3 release-- was [Re: oVirt 3.3 Release Go/No-Go Meeting Minutes]
by Markus Stockhausen
I simply installed the ksm package and everything runs fine. Am I missing something here?

Markus
[Users] Support for VAAI SCSI primitives in ovirt?
by Paul Jansen
Hello.

I've been following the progress of VAAI support being added to the
'target-core' framework in the Linux kernel. Support for all 4 features did
not make it into the recent 3.11 kernel release but is planned for 3.12.

There is some detail on VAAI (vSphere APIs for Array Integration) here:
http://linux-iscsi.org/wiki/VStorage_APIs_for_Array_Integration
VAAI is obviously a VMware term, but the SCSI primitives it refers to are
open. From the above linked page: "VAAI significantly enhances the
integration of storage and servers by enabling seamless offload of locking
and block operations onto the storage array."

It seems reasonable to assume that Fedora 20 (and probably Fedora 19 with a
kernel update at some stage) will be using the 3.12 kernel and could be used
to export iSCSI/FC targets to oVirt.

VMware also provides VAAI integration for NAS datastores (via the
installation of a vendor-specific plugin into VMware vCenter) that also
significantly improves performance for some operations.

From what I can make out from the VMware documentation, the ability to use
the VAAI offloads only applies to the upper-tier licensed versions of
vCenter. I think there is an opportunity for oVirt to add support for this
feature and make it stand out even against the freely licensed ESXi (which
will be missing this feature). With more people looking at oVirt rather than
getting started with, and potentially staying with, VMware, this is a good
opportunity to gather market share.

What is the current status of support for these VAAI SCSI primitives in
oVirt? Is there anything planned at the moment?

Regarding the VAAI NAS plugin feature that VMware now has - are there plans
to help offload certain operations happening on NFS datastores? For
instance, some sort of agent installed on a Linux NFS server could allow
oVirt to instruct the NFS server machine to perform an offloaded copy/clone
operation rather than that process needing to be done over the wire.

Thanks,
Paul
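On that last point, the agent idea boils down to running the copy on the NFS server itself, so only a command (not image data) crosses the wire. A minimal sketch of what such an agent would execute; everything here (the function name, the agent concept) is hypothetical, not an existing oVirt interface:

```python
import shutil

def server_side_clone(src, dst):
    """What a hypothetical agent on the NFS server would run.

    The client only sends (src, dst) paths; the image bytes never traverse
    the network. On Python 3.8+, shutil.copyfile uses platform fast-copy
    (e.g. sendfile on Linux) where available, keeping the copy in-kernel.
    """
    shutil.copyfile(src, dst)
    return dst
```

An RPC wrapper around something like this (plus sparse-file handling, which plain copyfile does not preserve) is roughly the shape the NAS offload would take.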
[Users] Add NFS Domain on Fedora 19 host fails (3.3 RC)
by Markus Stockhausen
Hello,

Me again. Adding an NFS domain to the cluster fails. It looks like the error
from http://lists.ovirt.org/pipermail/users/2013-April/014080.html, but I
have a Fedora 19 host and sanlock seems to be a newer version. I wanted to
restart the setup process, but ovirt-engine tells me that the storage domain
already exists on the host. How can I recover from this failed action?

Just in case you are interested in the current situation, the logs are below.

Markus
[root@colovn1 dom_md]# getsebool -a | grep virt_use_nfs
virt_use_nfs --> on

[root@colovn1 dom_md]# df
Filesystem
...
10.10.30.251:/var/nas5/ovirt 7810410496 7094904832 715505664 91% /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt

content of /var/log/sanlock:
2013-09-04 15:47:04+0200 14844 [2655]: sanlock daemon started 2.8 host e35ca5ca-dfab-4521-b267-4d9b1f48ded4.colovn1.co
2013-09-04 15:58:04+0200 23 [1024]: sanlock daemon started 2.8 host b3a8ef4a-0966-4ffa-9086-e5a8c6ad7363.colovn1.co
2013-09-04 19:25:53+0200 23 [919]: sanlock daemon started 2.8 host 9654f1d0-6d87-4dd1-abdc-68c00fc4fe64.colovn1.co
2013-09-05 06:39:47+0200 25 [1173]: sanlock daemon started 2.8 host e2829012-4b8f-417f-809e-532c64108955.colovn1.co
2013-09-05 06:48:01+0200 518 [1178]: s1 lockspace 8dd2a4a5-b49f-4fef-a427-e62627fc09f7:250:/rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids:0
2013-09-05 06:48:01+0200 518 [1786]: open error -13 /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids
2013-09-05 06:48:01+0200 518 [1786]: s1 open_disk /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids error -13
2013-09-05 06:48:02+0200 519 [1178]: s1 add_lockspace fail result -19

permissions of the file:
[root@colovn1 dom_md]# ls -al /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 06:47 /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids
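A side note that may save someone a lookup: sanlock logs negative errno values, so "open error -13" is EACCES (the sanlock process itself cannot read the ids file even though vdsm:kvm 660 looks right; the group membership of the sanlock process and SELinux labels are plausible suspects to check), and "add_lockspace fail result -19" is ENODEV, matching the 'No such device' in the SanlockException further down. Decoding them:

```python
import errno
import os

def decode_sanlock_err(code):
    """Map a negative errno from the sanlock log to (name, message)."""
    n = -code
    return errno.errorcode.get(n, "UNKNOWN"), os.strerror(n)

print(decode_sanlock_err(-13))  # ('EACCES', 'Permission denied')
print(decode_sanlock_err(-19))  # ('ENODEV', 'No such device')
```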
vdsm.log:
Thread-186::DEBUG::2013-09-05 06:48:01,872::task::974::TaskManager.Task::(_decref) Task=`22e150b3-2fae-42e2-ac85-52cee9eadfb9`::ref 1 aborting False
Thread-186::INFO::2013-09-05 06:48:01,872::sp::592::Storage.StoragePool::(create) spUUID=b054727d-fe4a-41ed-8393-a81e36b8a1af poolName=Collogia master_sd=8dd2a4a5-b49f-4fef-a427-e62627fc09f7 domList=['8dd2a4a5-b49f-4fef-a427-e62627fc09f7'] masterVersion=1 {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3, 'LOCKRENEWALINTERVALSEC': 5}
Thread-186::INFO::2013-09-05 06:48:01,872::fileSD::315::Storage.StorageDomain::(validate) sdUUID=8dd2a4a5-b49f-4fef-a427-e62627fc09f7
Thread-186::DEBUG::2013-09-05 06:48:01,888::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=NAS3_IB', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.10.30.251:/var/nas5/ovirt', 'ROLE=Regular', 'SDUUID=8dd2a4a5-b49f-4fef-a427-e62627fc09f7', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=b0b0af59d3c7c6ec83dd18ca11a6c1653de9a3b6']
Thread-186::DEBUG::2013-09-05 06:48:01,898::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=NAS3_IB', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.10.30.251:/var/nas5/ovirt', 'ROLE=Regular', 'SDUUID=8dd2a4a5-b49f-4fef-a427-e62627fc09f7', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=b0b0af59d3c7c6ec83dd18ca11a6c1653de9a3b6']
Thread-186::DEBUG::2013-09-05 06:48:01,899::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-186::DEBUG::2013-09-05 06:48:01,900::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-186::DEBUG::2013-09-05 06:48:01,900::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=NAS3_IB', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.10.30.251:/var/nas5/ovirt', 'ROLE=Regular', 'SDUUID=8dd2a4a5-b49f-4fef-a427-e62627fc09f7', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=e2dfa292d09c0cb420dc4d30ab5eed11c84a399e']
Thread-186::DEBUG::2013-09-05 06:48:01,903::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-186::INFO::2013-09-05 06:48:01,903::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain 8dd2a4a5-b49f-4fef-a427-e62627fc09f7 (id: 250)
Thread-186::ERROR::2013-09-05 06:48:02,905::task::850::TaskManager.Task::(_setError) Task=`22e150b3-2fae-42e2-ac85-52cee9eadfb9`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool
    masterVersion, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 617, in create
    self._acquireTemporaryClusterLock(msdUUID, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 559, in _acquireTemporaryClusterLock
    msd.acquireHostId(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
    self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
    raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: ('8dd2a4a5-b49f-4fef-a427-e62627fc09f7', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))

/var/log/messages content:
Sep  5 06:48:01 colovn1 multipathd: dm-3: remove map (uevent)
Sep  5 06:48:01 colovn1 multipathd: dm-3: remove map (uevent)
Sep  5 06:48:01 colovn1 multipathd: dm-3: remove map (uevent)
Sep  5 06:48:01 colovn1 multipathd: dm-3: remove map (uevent)
Sep  5 06:48:01 colovn1 sanlock[1173]: 2013-09-05 06:48:01+0200 518 [1786]: open error -13 /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids
Sep  5 06:48:01 colovn1 sanlock[1173]: 2013-09-05 06:48:01+0200 518 [1786]: s1 open_disk /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids error -13
Sep  5 06:48:02 colovn1 sanlock[1173]: 2013-09-05 06:48:02+0200 519 [1178]: s1 add_lockspace fail result -19
Sep  5 06:48:02 colovn1 vdsm TaskManager.Task ERROR Task=`22e150b3-2fae-42e2-ac85-52cee9eadfb9`::Unexpected error
Sep  5 06:50:13 colovn1 su: (to root) root on none
Sep  5 06:54:23 colovn1 systemd[1]: Starting Cleanup of Temporary Directories...
Sep  5 06:54:23 colovn1 systemd[1]: Started Cleanup of Temporary Directories.
Sep  5 06:55:05 colovn1 ntpd[1241]: 0.0.0.0 c612 02 freq_set kernel -32.960 PPM
Sep  5 06:55:05 colovn1 ntpd[1241]: 0.0.0.0 c615 05 clock_sync

sanlock version:
[root@colovn1 dom_md]# yum list | grep sanlock
libvirt-lock-sanlock.x86_64    1.0.5.5-1.fc19    @updates
sanlock.x86_64                 2.8-1.fc19        @updates
sanlock-lib.x86_64             2.8-1.fc19        @updates
sanlock-python.x86_64          2.8-1.fc19        @updates
fence-sanlock.x86_64           2.8-1.fc19        updates
sanlock-devel.i686             2.8-1.fc19        updates
sanlock-devel.x86_64           2.8-1.fc19        updates
sanlock-lib.i686               2.8-1.fc19        updates
--_000_12EF8D94C6F8734FB2FF37B9FBEDD17358573729EXCHANGEcollogi_
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
<html dir=3D"ltr">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Diso-8859-=
1">
<style id=3D"owaParaStyle" type=3D"text/css">P {margin-top:0;margin-bottom:=
0;}</style>
</head>
<body ocsi=3D"0" fpstyle=3D"1">
<div style=3D"direction: ltr;font-family: Tahoma;color: #000000;font-size: =
10pt;">Hello,<br>
<br>
me again. Adding a NFS Domain to the cluster fails. It looks like the error=
<br>
from http://lists.ovirt.org/pipermail/users/2013-April/014080.html but I <b=
r>
have a Fedora 19 host and sanlock seems to be a newer version. I wanted <br=
>
to restart the setup process but ovirt-engine tells me that the storage <br=
>
domain already exists on the host. How can I recover from this failed <br>
action? <br>
<br>
Just in case you are interested in the current situation you will find the<=
br>
logs attached.<br>
<br>
Markus<br>
<br>
[root@colovn1 dom_md]# getsebool -a | grep virt_use_nfs<br>
virt_use_nfs --> on<br>
<br>
[root@colovn1 dom_md]# df<br>
Filesystem<br>
...<br>
10.10.30.251:/var/nas5/ovirt 7810410496 7094904832 715505664 91=
% /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt<br>
<br>
content of /var/log/sanlock<br>
2013-09-04 15:47:04+0200 14844 [2655]: sanlock daemon started 2.8 host =
e35ca5ca-dfab-4521-b267-4d9b1f48ded4.colovn1.co<br>
2013-09-04 15:58:04+0200 23 [1024]: sanlock daemon started 2.8 host b3a=
8ef4a-0966-4ffa-9086-e5a8c6ad7363.colovn1.co<br>
2013-09-04 19:25:53+0200 23 [919]: sanlock daemon started 2.8 host 9654=
f1d0-6d87-4dd1-abdc-68c00fc4fe64.colovn1.co<br>
2013-09-05 06:39:47+0200 25 [1173]: sanlock daemon started 2.8 host e28=
29012-4b8f-417f-809e-532c64108955.colovn1.co<br>
2013-09-05 06:48:01+0200 518 [1178]: s1 lockspace 8dd2a4a5-b49f-4fef-a4=
27-e62627fc09f7:250:/rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2=
a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids:0<br>
2013-09-05 06:48:01+0200 518 [1786]: open error -13 /rhev/data-center/m=
nt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md=
/ids<br>
2013-09-05 06:48:01+0200 518 [1786]: s1 open_disk /rhev/data-center/mnt=
/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/i=
ds error -13<br>
2013-09-05 06:48:02+0200 519 [1178]: s1 add_lockspace fail result -19<b=
r>
<br>
permissions of file:<br>
[root@colovn1 dom_md]# ls -al /rhev/data-center/mnt/10.10.30.251:_var_nas5_=
ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids<br>
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 06:47 /rhev/data-center/mnt/10.=
10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids<b=
r>
<br>
vdsm.log<br>
Thread-186::DEBUG::2013-09-05 06:48:01,872::task::974::TaskManager.Task::(_=
decref) Task=3D`22e150b3-2fae-42e2-ac85-52cee9eadfb9`::ref 1 aborting False=
<br>
Thread-186::INFO::2013-09-05 06:48:01,872::sp::592::Storage.StoragePool::(c=
reate) spUUID=3Db054727d-fe4a-41ed-8393-a81e36b8a1af poolName=3DCollogia ma=
ster_sd=8dd2a4a5-b49f-4fef-a427-e62627fc09f7 domList=['8dd2a4a5-b49f-4fef-a427-e62627fc09f7'] masterVersion=1 {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3, 'LOCKRENEWALINTERVALSEC': 5}
Thread-186::INFO::2013-09-05 06:48:01,872::fileSD::315::Storage.StorageDomain::(validate) sdUUID=8dd2a4a5-b49f-4fef-a427-e62627fc09f7
Thread-186::DEBUG::2013-09-05 06:48:01,888::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=NAS3_IB', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.10.30.251:/var/nas5/ovirt', 'ROLE=Regular', 'SDUUID=8dd2a4a5-b49f-4fef-a427-e62627fc09f7', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=b0b0af59d3c7c6ec83dd18ca11a6c1653de9a3b6']
Thread-186::DEBUG::2013-09-05 06:48:01,898::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=NAS3_IB', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.10.30.251:/var/nas5/ovirt', 'ROLE=Regular', 'SDUUID=8dd2a4a5-b49f-4fef-a427-e62627fc09f7', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=b0b0af59d3c7c6ec83dd18ca11a6c1653de9a3b6']
Thread-186::DEBUG::2013-09-05 06:48:01,899::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-186::DEBUG::2013-09-05 06:48:01,900::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-186::DEBUG::2013-09-05 06:48:01,900::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=NAS3_IB', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.10.30.251:/var/nas5/ovirt', 'ROLE=Regular', 'SDUUID=8dd2a4a5-b49f-4fef-a427-e62627fc09f7', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=e2dfa292d09c0cb420dc4d30ab5eed11c84a399e']
Thread-186::DEBUG::2013-09-05 06:48:01,903::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-186::INFO::2013-09-05 06:48:01,903::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain 8dd2a4a5-b49f-4fef-a427-e62627fc09f7 (id: 250)
Thread-186::ERROR::2013-09-05 06:48:02,905::task::850::TaskManager.Task::(_setError) Task=`22e150b3-2fae-42e2-ac85-52cee9eadfb9`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool
    masterVersion, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 617, in create
    self._acquireTemporaryClusterLock(msdUUID, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 559, in _acquireTemporaryClusterLock
    msd.acquireHostId(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
    self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
    raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: ('8dd2a4a5-b49f-4fef-a427-e62627fc09f7', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
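For readers decoding the numbers in these logs: the `SanlockException(19, ...)` here and the `open error -13` in the syslog below look like (negated) errno values. A quick check in plain Python, nothing oVirt-specific:

```python
import errno
import os

# The two codes that appear in the logs above, decoded as errno values.
for code in (13, 19):
    print(code, errno.errorcode[code], os.strerror(code))

# 13 -> EACCES ("Permission denied"): sanlock could not read dom_md/ids
# 19 -> ENODEV ("No such device"): surfaced as "Sanlock lockspace add failure"
```

That reading fits the sequence: the open of the `ids` lease file fails with a permission error, so adding the lockspace subsequently fails.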

/var/log/messages content
Sep  5 06:48:01 colovn1 multipathd: dm-3: remove map (uevent)
Sep  5 06:48:01 colovn1 multipathd: dm-3: remove map (uevent)
Sep  5 06:48:01 colovn1 multipathd: dm-3: remove map (uevent)
Sep  5 06:48:01 colovn1 multipathd: dm-3: remove map (uevent)
Sep  5 06:48:01 colovn1 sanlock[1173]: 2013-09-05 06:48:01+0200 518 [1786]: open error -13 /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids
Sep  5 06:48:01 colovn1 sanlock[1173]: 2013-09-05 06:48:01+0200 518 [1786]: s1 open_disk /rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids error -13
Sep  5 06:48:02 colovn1 sanlock[1173]: 2013-09-05 06:48:02+0200 519 [1178]: s1 add_lockspace fail result -19
Sep  5 06:48:02 colovn1 vdsm TaskManager.Task ERROR Task=`22e150b3-2fae-42e2-ac85-52cee9eadfb9`::Unexpected error
Sep  5 06:50:13 colovn1 su: (to root) root on none
Sep  5 06:54:23 colovn1 systemd[1]: Starting Cleanup of Temporary Directories...
Sep  5 06:54:23 colovn1 systemd[1]: Started Cleanup of Temporary Directories.
Sep  5 06:55:05 colovn1 ntpd[1241]: 0.0.0.0 c612 02 freq_set kernel -32.960 PPM
Sep  5 06:55:05 colovn1 ntpd[1241]: 0.0.0.0 c615 05 clock_sync
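Since error -13 is EACCES, one way to confirm whether the sanlock process can actually read the lease file is to compare the file's mode bits and ownership against the daemon's uid/gids. A minimal sketch (the path comes from the syslog above; which uid/gids sanlock runs with on your host is an assumption you'd have to check, and NFS adds ACL/root_squash/SELinux considerations this deliberately ignores):

```python
import os
import stat

def readable_by(path, uid, gids):
    """Return True if a process with this uid and these gids could open
    `path` for reading, judging by mode bits and ownership alone
    (ignores ACLs, NFS root_squash and SELinux, which also matter here)."""
    st = os.stat(path)
    if st.st_uid == uid:
        return bool(st.st_mode & stat.S_IRUSR)
    if st.st_gid in gids:
        return bool(st.st_mode & stat.S_IRGRP)
    return bool(st.st_mode & stat.S_IROTH)

# Hypothetical usage on the host, with sanlock's uid/gids:
# readable_by("/rhev/data-center/mnt/10.10.30.251:_var_nas5_ovirt/"
#             "8dd2a4a5-b49f-4fef-a427-e62627fc09f7/dom_md/ids",
#             sanlock_uid, sanlock_gids)
```

If this returns False for the daemon's identity, the -13 is explained by plain ownership/mode, before looking at anything NFS-specific.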

sanlock version
[root@colovn1 dom_md]# yum list | grep sanlock
libvirt-lock-sanlock.x86_64      1.0.5.5-1.fc19      @updates
sanlock.x86_64                   2.8-1.fc19          @updates
sanlock-lib.x86_64               2.8-1.fc19          @updates
sanlock-python.x86_64            2.8-1.fc19          @updates
fence-sanlock.x86_64             2.8-1.fc19          updates
sanlock-devel.i686               2.8-1.fc19          updates
sanlock-devel.x86_64             2.8-1.fc19          updates
sanlock-lib.i686                 2.8-1.fc19          updates
****************************************************************************
This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.
e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.
Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln
executive board:
Kadir Akin
Dr. Michael Höhnerbach
President of the supervisory board:
Hans Kristian Langva
Registry office: district court Cologne
Register number: HRB 52 497
****************************************************************************
[Users] OVirt-Engine 3.3 RC - Add Fedora 19 host fails
by Markus Stockhausen
Hello,

I'm trying to add a Fedora 19 host to my newly installed oVirt engine 3.3.
The host has vdsm.x86_64 4.12.1-1.fc19 installed from the beta repos.
During the installation two interfaces are active:

Storage/NFS network:
ib0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 2044
        inet 10.10.30.1 netmask 255.255.255.0 broadcast 10.10.30.255
Infiniband hardware address can be incorrect! Please read BUGS section in ifconfig(8).
        infiniband 80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00 txqueuelen 256 (InfiniBand)
        RX packets 38899 bytes 2213697 (2.1 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 67314 bytes 15746759 (15.0 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Frontend/Management network:
p49p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.10.51 netmask 255.255.255.0 broadcast 192.168.10.255
        ether 00:30:48:d7:9e:c4 txqueuelen 1000 (Ethernet)
        RX packets 14631 bytes 9069189 (8.6 MiB)
        RX errors 0 dropped 354 overruns 0 frame 0
        TX packets 6552 bytes 2348866 (2.2 MiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
        device interrupt 16 memory 0xface0000-fad00000

The installation (with IP 192.168.10.51) fails and afterwards the frontend
interface is down. Attached are the VDSM logs.

Thanks for your help in advance.

Markus

supervdsm.log:

MainProcess|Thread-16::ERROR::2013-09-04 20:37:32,695::netinfo::239::root::(speed) cannot read ib0 speed
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/vdsm/netinfo.py", line 235, in speed
    s = int(file('/sys/class/net/%s/speed' % dev).read())
IOError: [Errno 22] Invalid argument
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:32,698::configNetwork::176::root::(addNetwork) validating network...
MainProcess|Thread-16::INFO::2013-09-04 20:37:32,698::configNetwork::191::root::(addNetwork) Adding network ovirtmgmt with vlan=None, bonding=None, nics=['p49p1'], bondingOptions=None, mtu=None, bridged=True, options={'STP': 'no', 'implicitBonding': True}
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:32,699::ifcfg::366::root::(_persistentBackup) backing up ifcfg-ovirtmgmt: # original file did not exist
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:32,702::ifcfg::665::Storage.Misc.excCmd::(ifdown) '/usr/sbin/ifdown ovirtmgmt' (cwd None)
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:32,917::ifcfg::665::Storage.Misc.excCmd::(ifdown) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:32,917::utils::486::root::(execCmd) '/sbin/ip route show to 0.0.0.0/0 table all' (cwd None)
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:32,919::utils::505::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-16::ERROR::2013-09-04 20:37:32,923::netinfo::239::root::(speed) cannot read ib0 speed
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/vdsm/netinfo.py", line 235, in speed
    s = int(file('/sys/class/net/%s/speed' % dev).read())
IOError: [Errno 22] Invalid argument
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:32,926::ifcfg::366::root::(_persistentBackup) backing up ifcfg-p49p1: # original file did not exist
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:32,926::ifcfg::665::Storage.Misc.excCmd::(ifdown) '/usr/sbin/ifdown p49p1' (cwd None)
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,243::ifcfg::665::Storage.Misc.excCmd::(ifdown) SUCCESS: <err> = 'bridge ovirtmgmt does not exist!\n'; <rc> = 0
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,243::ifcfg::672::Storage.Misc.excCmd::(_ifup) '/usr/sbin/ifup p49p1' (cwd None)
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,354::ifcfg::672::Storage.Misc.excCmd::(_ifup) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,354::__init__::78::root::(_addSourceRoute) Adding source route ovirtmgmt, None, None, None
MainProcess|Thread-16::ERROR::2013-09-04 20:37:33,354::sourceRoute::68::root::(configure) ipaddr, mask or gateway not received
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,355::ifcfg::672::Storage.Misc.excCmd::(_ifup) '/usr/sbin/ifup ovirtmgmt' (cwd None)
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,731::ifcfg::672::Storage.Misc.excCmd::(_ifup) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,733::libvirtconnection::101::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 43 edom: 19 level: 2 message: Netzwerk nicht gefunden: Kein Netzwerk mit übereinstimmendem Namen »vdsm-ovirtmgmt« [German locale; in English: "Network not found: no network with matching name 'vdsm-ovirtmgmt'"]
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,733::ifcfg::245::root::(_atomicNetworkBackup) Backed up ovirtmgmt
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,734::libvirtconnection::101::libvirtconnection::(wrapper) Unknown libvirterror: ecode: 43 edom: 19 level: 2 message: Netzwerk nicht gefunden: Kein Netzwerk mit übereinstimmendem Namen »vdsm-ovirtmgmt«
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,734::ifcfg::254::root::(_persistentNetworkBackup) backing up network ovirtmgmt: # original file did not exist
MainProcess|Thread-16::DEBUG::2013-09-04 20:37:33,741::configNetwork::541::setupNetworks::(setupNetworks) Checking connectivity...
MainProcess|storageRefresh::DEBUG::2013-09-04 20:37:34,278::misc::817::SamplingMethod::(__call__) Returning last result

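A side note on the repeated `cannot read ib0 speed` traceback above: the kernel returns EINVAL when `/sys/class/net/<dev>/speed` is read for devices that don't report an Ethernet-style link speed (InfiniBand here) or for links that are down, so `int(file(...).read())` raises. The log shows vdsm tolerates it and reports `'speed': 0` in getCapabilities; a tolerant reader looks roughly like this (a sketch of the idea, not vdsm's actual code):

```python
import os

def iface_speed(dev, sysfs="/sys/class/net"):
    """Best-effort link speed in Mb/s; 0 when the kernel refuses to say
    (EINVAL on InfiniBand devices and on links that are down) or the
    file is absent/unparseable."""
    try:
        with open(os.path.join(sysfs, dev, "speed")) as f:
            return int(f.read())
    except (IOError, OSError, ValueError):
        return 0
```

So the traceback is noisy but harmless; it is not what makes the host-add fail.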
vdsm.log:

storageRefresh::DEBUG::2013-09-04 20:37:32,255::misc::817::SamplingMethod::(__call__) Returning last result
Thread-13::DEBUG::2013-09-04 20:37:32,365::BindingXMLRPC::979::vds::(wrapper) client [192.168.10.110]::call ping with () {}
Thread-13::DEBUG::2013-09-04 20:37:32,366::BindingXMLRPC::986::vds::(wrapper) return ping with {'status': {'message': 'Done', 'code': 0}}
Thread-14::DEBUG::2013-09-04 20:37:32,374::BindingXMLRPC::979::vds::(wrapper) client [192.168.10.110]::call getCapabilities with () {}
Thread-14::DEBUG::2013-09-04 20:37:32,562::utils::486::root::(execCmd) '/sbin/ip route show to 0.0.0.0/0 table all' (cwd None)
Thread-14::DEBUG::2013-09-04 20:37:32,573::utils::505::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-14::ERROR::2013-09-04 20:37:32,577::netinfo::239::root::(speed) cannot read ib0 speed
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/vdsm/netinfo.py", line 235, in speed
    s = int(file('/sys/class/net/%s/speed' % dev).read())
IOError: [Errno 22] Invalid argument
Thread-14::DEBUG::2013-09-04 20:37:32,611::BindingXMLRPC::986::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:df62bd8ee7d8'}], 'FC': []}, 'packages2': {'kernel': {'release': '200.fc19.x86_64', 'buildtime': 1377795945.0, 'version': '3.10.10'}, 'spice-server': {'release': '1.fc19', 'buildtime': 1375454091L, 'version': '0.12.4'}, 'vdsm': {'release': '1.fc19', 'buildtime': 1377639886L, 'version': '4.12.1'}, 'qemu-kvm': {'release': '7.fc19', 'buildtime': 1376836471L, 'version': '1.4.2'}, 'libvirt': {'release': '1.fc19', 'buildtime': 1375400611L, 'version': '1.0.5.5'}, 'qemu-img': {'release': '7.fc19', 'buildtime': 1376836471L, 'version': '1.4.2'}, 'mom': {'release': '3.fc19', 'buildtime': 1375215820L, 'version': '0.3.2'}}, 'cpuModel': 'Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz', 'hooks': {}, 'cpuSockets': '1', 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {}, 'bridges': {}, 'uuid': '49434D53-0200-48D7-3000-D7483000C49E', 'lastClientIface': 'p49p1', 'nics': {'ib0': {'netmask': '255.255.255.0', 'addr': '10.10.30.1', 'hwaddr': '80:00:00:48:fe:80:00:00:00:00:00:00:00:23:7d:ff:ff:94:d3:fd', 'cfg': {'NETWORK': '10.10.30.0', 'IPADDR': '10.10.30.1', 'BROADCAST': '10.10.30.255', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'DEVICE': 'ib0', 'TYPE': 'Infiniband', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'speed': 0, 'mtu': '2044'}, 'ib1': {'netmask': '', 'addr': '', 'hwaddr': '80:00:00:49:fe:80:00:00:00:00:00:00:00:23:7d:ff:ff:94:d3:fe', 'cfg': {}, 'ipv6addrs': [], 'speed': 0, 'mtu': '4092'}, 'p50p1': {'netmask': '', 'addr': '', 'hwaddr': '00:30:48:d7:9e:c5', 'cfg': {}, 'ipv6addrs': [], 'speed': 0, 'mtu': '1500'}, 'p49p1': {'netmask': '255.255.255.0', 'addr': '192.168.10.51', 'hwaddr': '00:30:48:d7:9e:c4', 'cfg': {}, 'ipv6addrs': [], 'speed': 1000, 'mtu': '1500'}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:df62bd8ee7d8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3'], 'reservedMem': '321', 'bondings': {'bond0': {'netmask': '', 'addr': '', 'slaves': [], 'hwaddr': 'e6:a8:3e:20:57:b0', 'cfg': {}, 'ipv6addrs': [], 'mtu': '1500'}}, 'software_version': '4.12', 'memSize': '24104', 'cpuSpeed': '2668.000', 'version_name': 'Snow Man', 'vlans': {}, 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '0.0.0.0', 'cpuThreads': '8', 'emulatedMachines': [u'pc', u'q35', u'isapc', u'pc-0.10', u'pc-0.11', u'pc-0.12', u'pc-0.13', u'pc-0.14', u'pc-0.15', u'pc-1.0', u'pc-1.1', u'pc-1.2', u'pc-1.3', u'none'], 'operatingSystem': {'release': '3', 'version': '19', 'name': 'Fedora'}, 'lastClient': '192.168.10.110'}}
Thread-15::DEBUG::2013-09-04 20:37:32,640::BindingXMLRPC::979::vds::(wrapper) client [192.168.10.110]::call ping with () {}
Thread-15::DEBUG::2013-09-04 20:37:32,640::BindingXMLRPC::986::vds::(wrapper) return ping with {'status': {'message': 'Done', 'code': 0}}
Thread-16::DEBUG::2013-09-04 20:37:32,642::BindingXMLRPC::979::vds::(wrapper) client [192.168.10.110]::call setupNetworks with ({'ovirtmgmt': {'nic': 'p49p1', 'STP': 'no', 'bridged': 'true'}}, {}, {'connectivityCheck': 'true', 'connectivityTimeout': 120}) {}
storageRefresh::DEBUG::2013-09-04 20:37:34,278::multipath::111::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
storageRefresh::DEBUG::2013-09-04 20:37:34,315::multipath::111::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
storageRefresh::DEBUG::2013-09-04 20:37:34,315::lvm::483::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
storageRefresh::DEBUG::2013-09-04 20:37:34,315::lvm::485::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
storageRefresh::DEBUG::2013-09-04 20:37:34,315::lvm::494::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
storageRefresh::DEBUG::2013-09-04 20:37:34,316::lvm::496::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
storageRefresh::DEBUG::2013-09-04 20:37:34,316::lvm::514::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
storageRefresh::DEBUG::2013-09-04 20:37:34,316::lvm::516::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
storageRefresh::DEBUG::2013-09-04 20:37:34,316::misc::817::SamplingMethod::(__call__) Returning last result
storageRefresh::WARNING::2013-09-04 20:37:34,316::fileUtils::167::Storage.fileUtils::(createdir) Dir /rhev/data-center/hsm-tasks already exists
Thread-16::ERROR::2013-09-04 20:39:36,278::API::1261::vds::(setupNetworks) connectivity check failed
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1259, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in setupNetworks
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
ConfigNetworkError: (10, 'connectivity check failed')
Thread-16::DEBUG::2013-09-04 20:39:36,280::BindingXMLRPC::986::vds::(wrapper) return setupNetworks with {'status': {'message': 'connectivity check failed', 'code': 10}}

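For context on that final error: setupNetworks was called with `{'connectivityCheck': 'true', 'connectivityTimeout': 120}`, and the timestamps show vdsm waiting from 20:37:33 to 20:39:36, roughly the 120-second window, before giving up. The engine has to re-contact the host over the new ovirtmgmt bridge within that window, otherwise vdsm rolls the network change back (which matches the frontend interface ending up down). The wait itself is just a poll loop, something like this illustrative sketch (not vdsm's implementation; `seen_recently` stands in for "the engine has talked to us since the change"):

```python
import time

def wait_for_connectivity(seen_recently, timeout=120.0, interval=1.0):
    """Poll `seen_recently()` until it returns True or `timeout` seconds
    elapse; mirrors the shape of a connectivity check guarding a network
    reconfiguration against locking the host out of management."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if seen_recently():
            return True
        time.sleep(interval)
    return False  # the caller would roll back the new config here
```

So the thing to investigate is why the engine at 192.168.10.110 could not reach the host once p49p1 was enslaved to the ovirtmgmt bridge.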
[Users] oVirt Weekly Meeting Minutes -- 2013-09-04
by Mike Burns
Minutes:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-04-14.01.html
Minutes (text):
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-04-14.01.txt
Log:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-04-14.01.log.html
============================
#ovirt: oVirt Weekly Meeting
============================
Meeting started by mburns at 14:01:12 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-09-04-14.01.log.html
.
Meeting summary
---------------
* Agenda and Roll Call (mburns, 14:01:23)
* 3.3 brief update (mburns, 14:01:31)
* conferences and workshops (mburns, 14:01:41)
* infra update (mburns, 14:01:43)
* 3.3 brief update (mburns, 14:03:53)
* as decided by the go/no-go meeting yesterday, there were a couple
issues that needed to be addressed (mburns, 14:04:15)
* AIO would fail on F19 (mburns, 14:04:27)
* due to sdk issue, resolved by new build (mburns, 14:04:41)
* AIO = All in One install (dneary, 14:05:04)
* el6 would not install due to missing spice-html package (fixed)
(mburns, 14:05:13)
* AIO = All in One install (mburns, 14:05:21)
* AIO = All in One install (dneary, 14:05:32)
* gluster related bug when installing gluster only engine (fixed, el6
build uploaded, f19 uploading now) (mburns, 14:06:00)
* in order to give a few days for testing, the new release date is
09-Sep (mburns, 14:06:32)
* ACTION: mburns to add info about bug 1004005 to release notes
(mburns, 14:16:09)
* ACTION: mburns to add info about gluster issue on el6 to release
notes (mburns, 14:16:23)
* Conferences and workshops (mburns, 14:19:14)
* oVirt developer meetings set for 21-23 October in Edinburgh
(mburns, 14:21:08)
* to attend, please register for the KVM Forum through the
linuxfoundation website (mburns, 14:21:48)
* First 200 registrants for KVM Forum get free admission (mburns,
14:21:51)
* please register soon for the KVM Forum, there are ~7 ovirt
presentations on the schedule (mburns, 14:24:39)
* schedule to be posted by the end of the week (mburns, 14:24:49)
* infra update (mburns, 14:26:23)
* infra team had an informal meeting where they worked on tasks
(mburns, 14:28:06)
* dneary installed and configured spamassassin for the ovirt.org email
lists (mburns, 14:28:26)
* 3.4 planning (mburns, 14:29:40)
* there is an open thread on users@ soliciting feature requests for
the next oVirt release (mburns, 14:30:13)
* please respond or vote on new features that you're interested in
(mburns, 14:30:33)
* also looking for people to commit to features they want to develop
for the next version (mburns, 14:34:08)
* Other Topics (mburns, 14:35:58)
* no more topics (mburns, 14:38:47)
Meeting ended at 14:38:50 UTC.
Action Items
------------
* mburns to add info about bug 1004005 to release notes
* mburns to add info about gluster issue on el6 to release notes
Action Items, by person
-----------------------
* mburns
* mburns to add info about bug 1004005 to release notes
* mburns to add info about gluster issue on el6 to release notes
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (64)
* dneary (16)
* sahina (6)
* sbonazzo (5)
* ewoud (3)
* itamar (3)
* ovirtbot (3)
* jb_netapp (2)
* ekarlso (2)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[Users] ipmilan or iLo4
by Neil
Hi guys,
I'm testing some DL360 Gen8 hosts and was wondering which power
management agent would be recommended. I usually only use Dell
servers with ipmilan by default, but I'm not sure whether there would
be any benefit to using iLO instead?
Thanks.
Regards.
Neil Wilson.
[Users] Host stuck in unresponsive state
by Frank Wall
Hi,
my all-in-one host is stuck in unresponsive state, no matter what I try.
Found this error in my engine.log:
2013-08-31 23:53:59,325 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-54) Command GetCapabilitiesVDS execution
failed. Exception: VDSNetworkException: java.net.ConnectException:
Connection refused
This message starts to appear when I try to activate the host (from
maintenance state).
Any idea how to activate the host?
Running oVirt 3.3 RC2 on Fedora 19, all-in-one Setup.
Thanks
- Frank
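A note on the symptom: "Connection refused" from GetCapabilitiesVDS usually means nothing is listening on the vdsm port on the host (vdsmd not running, or crashed on start), rather than a firewall drop, which would typically time out instead of being refused. A quick way to tell the two apart from any machine (54321 as vdsm's default port is an assumption to verify on your setup):

```python
import socket

def port_open(host, port, timeout=3.0):
    """True if a TCP connect succeeds; an immediate refusal here matches
    the engine's java.net.ConnectException above, while a timeout hints
    at a firewall silently dropping packets."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

# e.g. port_open("my-aio-host", 54321)
```

If the port is closed, checking `systemctl status vdsmd` on the host would be the next step.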