[Users] oVirt Weekly Sync Meeting Minutes -- 2012-05-23
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by mburns at 14:00:23 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-05-23-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 14:00:41)
* Status of next release (mburns, 14:05:17)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145 (mburns,
14:05:29)
* AGREED: freeze date and beta release delayed by 1 week to 2012-06-07
(mburns, 14:12:33)
* post freeze, release notes flag needs to be used where required
(mburns, 14:14:21)
* https://bugzilla.redhat.com/show_bug.cgi?id=821867 is a VDSM blocker
for 3.1 (oschreib, 14:17:27)
* ACTION: dougsland to fix upstream vdsm right now, and open a bug on
libvirt augeas (oschreib, 14:21:44)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822158 (mburns,
14:23:39)
* assignee not available, update to come tomorrow (mburns, 14:24:59)
* ACTION: oschreib to make sure BZ#822158 is handled quickly
(oschreib, 14:25:29)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824397 (mburns,
14:28:55)
* 824397 expected to be merged prior to next week's meeting (mburns,
14:29:45)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=824420 (mburns,
14:30:15)
* tracker for node based on F17 (mburns, 14:30:28)
* blocked by util-linux bug currently (mburns, 14:30:40)
* new build expected from util-linux maintainer in next couple days
(mburns, 14:30:55)
* sub-project status -- engine (mburns, 14:32:49)
* nothing to report outside of blockers discussed above (mburns,
14:34:00)
* sub-project status -- vdsm (mburns, 14:34:09)
* nothing outside of blockers above (mburns, 14:35:36)
* sub-project status -- node (mburns, 14:35:43)
* working on f17 migration, but blocked by util-linux bug (mburns,
14:35:58)
* should be ready for freeze deadline (mburns, 14:36:23)
* Review decision on Java 7 and Fedora jboss rpms in oVirt Engine
(mburns, 14:36:43)
* Java7 basically working (mburns, 14:37:19)
* LINK: http://gerrit.ovirt.org/#change,4416 (oschreib, 14:39:35)
* engine will make ack/nack statement next week (mburns, 14:39:49)
* fedora jboss rpms patch is in review, short tests passed (mburns,
14:40:04)
* engine ack on fedora jboss rpms and java7 needed next week (mburns,
14:44:47)
* Upcoming Workshops (mburns, 14:45:11)
* NetApp workshop set for Jan 22-24 2013 (mburns, 14:47:16)
* already at half capacity for Workshop at LinuxCon Japan (mburns,
14:47:37)
* please continue to promote it (mburns, 14:48:19)
* proposal: board meeting to be held at all major workshops (mburns,
14:48:43)
* LINK: http://www.ovirt.org/wiki/OVirt_Global_Workshops (mburns,
14:49:30)
* Open Discussion (mburns, 14:50:12)
* oVirt/Quantum integration discussion will be held separately
(mburns, 14:50:43)
Meeting ended at 14:52:47 UTC.
Action Items
------------
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib to make sure BZ#822158 is handled quickly
Action Items, by person
-----------------------
* dougsland
* dougsland to fix upstream vdsm right now, and open a bug on libvirt
augeas
* oschreib
* oschreib to make sure BZ#822158 is handled quickly
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (98)
* oschreib (55)
* doronf (12)
* lh (11)
* sgordon (8)
* dougsland (8)
* ovirtbot (6)
* ofrenkel (4)
* cestila (2)
* RobertMdroid (2)
* ydary (2)
* rickyh (1)
* yzaslavs (1)
* cctrieloff (1)
* mestery_ (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[Users] Nested virtualization with Opteron 2nd generation and oVirt 3.1 possible?
by Gianluca Cecchi
Hello,
I have 2 physical servers with Opteron 2nd gen cpu.
There is CentOS 6.3 installed and some VM already configured on them.
Their /proc/cpuinfo contains
...
model name : Dual-Core AMD Opteron(tm) Processor 8222
...
The kvm_amd kernel module is loaded with its nested option enabled (the default):
# systool -m kvm_amd -v
Module = "kvm_amd"
Attributes:
initstate = "live"
refcnt = "15"
srcversion = "43D8067144E7D8B0D53D46E"
Parameters:
nested = "1"
npt = "1"
...
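A quick way to double-check the same thing directly from sysfs (just a
convenience check, equivalent to the systool output above):
# cat /sys/module/kvm_amd/parameters/nested
1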
I already configured a Fedora 17 VM as an oVirt 3.1 engine.
I'm trying to configure another VM as an oVirt 3.1 node with
ovirt-node-iso-2.5.5-0.1.fc17.iso.
It seems I'm not able to configure the VM so that the oVirt install
doesn't complain about missing hardware virtualization.
After some attempts, I tried this in my vm.xml for the cpu:
<cpu mode='custom' match='exact'>
<model fallback='allow'>athlon</model>
<vendor>AMD</vendor>
<feature policy='require' name='pni'/>
<feature policy='require' name='rdtscp'/>
<feature policy='force' name='svm'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='cmp_legacy'/>
</cpu>
Inside node /proc/cpuinfo becomes
processor : 3
vendor_id : AuthenticAMD
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.12.1
stepping : 3
microcode : 0x1000065
cpu MHz : 3013.706
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni
cx16 hypervisor lahf_lm cmp_legacy cr8_legacy
bogomips : 6027.41
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
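Note that the flags line above has no svm entry even though the XML asks
for it with policy='force'; a quick way to confirm that from inside the
node is:
# grep -c svm /proc/cpuinfo
0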
Two questions:
1) Is there any combination in the XML file I can give my VM so that
oVirt doesn't complain about missing hardware virtualization with this
processor?
2) Suppose 1) is not possible in my case and I still want to test the
interface and try some configuration operations, for example to see the
differences with RHEV 3.0. How can I do that?
At the moment this complaint about hardware virtualization prevents me
from activating the node.
I get
Installing Host f17ovn01. Step: RHEV_INSTALL.
Host f17ovn01 was successfully approved.
Host f17ovn01 running without virtualization hardware acceleration
Detected new Host f17ovn01. Host state was set to Non Operational.
Host f17ovn01 moved to Non-Operational state.
Host f17ovn01 moved to Non-Operational state as host does not meet the
cluster's minimum CPU level. Missing CPU features : CpuFlags
Can I lower the requirements to be able to operate without hw
virtualization in 3.1?
Thanks in advance,
Gianluca
[Users] importing from kvm into ovirt
by Jonathan Horne
I need to import a KVM virtual machine from a standalone KVM host into my
oVirt cluster. The standalone host is using local storage, and my oVirt
cluster is using iSCSI. Can I please have some advice on what's the best
way to get this system into oVirt?
Right now I see it as copying the .img file to somewhere... but I have no
idea where to start. I found this directory on one of my oVirt nodes:
/rhev/data-center/mnt/blockSD/fe633237-14b2-4f8b-aedd-bbf753bcafaf/master/vms
But inside are just directories that appear to have UUID-type names, and
I can't tell which belongs to which VM.
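Assuming those UUID-named directories follow the usual oVirt master
domain layout and each holds the OVF descriptor of one VM, I guess I
could grep them for a VM name to map a directory to a VM (rough sketch,
'myvm' is a placeholder name):
# cd /rhev/data-center/mnt/blockSD/fe633237-14b2-4f8b-aedd-bbf753bcafaf/master/vms
# grep -ril 'myvm' .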
Any advice would be greatly appreciated.
Thanks,
jonathan
________________________________
This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery. NOTE: Regardless of content, this e-mail shall not operate to
bind SKOPOS to any order or other contract unless pursuant to explicit
written agreement or government initiative expressly permitting the use
of e-mail for such purpose.
[Users] oVirt Workshop at LinuxCon Japan 2012
by Leslie Hawthorn
Hello everyone,
As part of our efforts to raise awareness of and educate more developers
about the oVirt project, we will be holding an oVirt workshop at
LinuxCon Japan, taking place on June 8, 2012. You can find full details
of the workshop agenda on the LinuxCon Japan site. [0]
Registration for the workshop is now open and is free of charge for the
first 50 participants. We will also look at adding additional
participant slots to the workshop based on demand.
Attendees who register for LinuxCon Japan via the workshop registration
link [1] will also be eligible for a discount on their LinuxCon Japan
registration.
Please spread the word to folks you think would find the workshop
useful. If they have already registered for LinuxCon Japan, they can
simply edit their existing registration to include the workshop.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-wo...
[1] - http://www.regonline.com/Register/Checkin.aspx?EventID=1099949
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
[Users] Can't access RHEV-H aka ovirt-node
by Scotto Alberto
Hi all,
I can't log in to the hypervisor, neither as root nor as admin, neither
from another computer via ssh nor directly on the machine.
I'm sure I remember the passwords. This is not the first time it happens:
last time I reinstalled the host. Everything worked ok for about 2 weeks,
and then...
What's going on? Is it a known behavior, somehow?
Before rebooting the hypervisor, I would like to try something. RHEV
Manager talks to RHEV-H without any problems. Can I log in with RHEV-M's
keys? How?
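One thing I thought of trying from the manager machine (just a sketch,
assuming the key path that engine-setup uses on oVirt; the equivalent
path on a RHEV-M install may differ):
# ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@<hypervisor-address>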
Thank you all.
Alberto Scotto
[Blue]
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
al.scotto(a)reply.it
www.reply.it
________________________________
--
The information transmitted is intended for the person or entity to which
it is addressed and may contain confidential and/or privileged material.
Any review, retransmission, dissemination or other use of, or taking of
any action in reliance upon, this information by persons or entities
other than the intended recipient is prohibited. If you received this in
error, please contact the sender and delete the material from any
computer.
[Users] Documentation: Storage Domain conversion from Data Domain to Export Domain
by Michael Ayers
Hey All,
I ran into this issue myself: I needed to convert a data domain to an
export domain in order to recover virtual guests from a corrupted
ovirt/rhevm instance into a new ovirt/rhevm instance. This wasn't
documented anywhere that I saw, but with the help of Itamar Heim and a
well-timed email to the list from Igor Lvovsky last night I was able to
do it. Below is a documented procedure for how to modify the metadata of
the data domain prior to importing it as an export domain. This procedure
works for both RHEV-M and oVirt. Let me know if you have any questions.
Original Data Domain Metadata File
--------------------------------------------------
CLASS=Data
DESCRIPTION=vm-storage
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=2
POOL_DESCRIPTION=MyPool
POOL_DOMAINS=dd8bc200-6e28-4185-bfe0-f0affb94f283:Active,ddefdf6c-ef68-419
c-9f72-76d27bf7d788:Active,66b3b243-6cc6-465f-b130-6f2cd0b70514:Active
POOL_SPM_ID=2
POOL_SPM_LVER=17
POOL_UUID=a207f052-f4bf-44a3-b637-c6d2020a7c41
REMOTE_PATH=nfsserver:/ovirt/vm-storage
ROLE=Master
SDUUID=66b3b243-6cc6-465f-b130-6f2cd0b70514
TYPE=NFS
VERSION=0
_SHA_CKSUM=009fa538321ac56749669127f43cc754aa59d398
Diff between Original DD Metadata File and ED Metadata File
--------------------------------------------------
--- metadata-data-storage 2012-10-30 12:24:52.484006958 -0700
+++ metadata-exp-storage 2012-10-30 12:14:59.043807789 -0700
@@ -1,5 +1,5 @@
-CLASS=Data
-DESCRIPTION=vm-storage
+CLASS=Backup
+DESCRIPTION=export-storage
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
@@ -7,13 +7,12 @@
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=2
POOL_DESCRIPTION=MyPool
-POOL_DOMAINS=dd8bc200-6e28-4185-bfe0-f0affb94f283:Active,ddefdf6c-ef68-41
9c-9f72-76d27bf7d788:Active,66b3b243-6cc6-465f-b130-6f2cd0b70514:Active
+POOL_DOMAINS=
POOL_SPM_ID=2
POOL_SPM_LVER=17
-POOL_UUID=a207f052-f4bf-44a3-b637-c6d2020a7c41
-REMOTE_PATH=nfsserver:/ovirt/vm-storage
-ROLE=Master
+POOL_UUID=
+REMOTE_PATH=nfsserver:/ovirt/export-storage
+ROLE=Regular
SDUUID=66b3b243-6cc6-465f-b130-6f2cd0b70514
TYPE=NFS
VERSION=0
-_SHA_CKSUM=009fa538321ac56749669127f43cc754aa59d398
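Applied by hand, the change is just a matter of backing up and editing
the domain's metadata file (minimal sketch; the dom_md path below is
assumed from the standard NFS storage domain layout):
# cd /path/to/nfs/export/66b3b243-6cc6-465f-b130-6f2cd0b70514/dom_md
# cp metadata metadata.orig
# vi metadata   # apply the CLASS/POOL_*/REMOTE_PATH/ROLE edits above and drop _SHA_CKSUM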
Thanks,
Michael
-------------------------------------
Michael J. Ayers
Red Hat Inc.
Solutions Architect
e: <mailto:ayersmj@redhat.com> ayersmj(a)redhat.com
w: <http://www.redhat.com/> www.redhat.com
[Users] Database creation failed on engine-setup
by Robber Phex
Hi, I'm unable to complete engine-setup:
$ sudo engine-setup
Welcome to oVirt Engine setup utility
HTTP Port [8080] :
HTTPS Port [8443] :
Host fully qualified domain name, note that this name should be fully
resolvable [RobberPhex-PC] :
ERROR: domain is not a valid domain name
User input failed validation, do you still wish to use it? (yes|no): yes
Password for Administrator (admin@internal) :
Warning: Weak Password.
Confirm password :
Database password (required for secure authentication with the locally
created database) :
Warning: Weak Password.
Confirm password :
Organization Name for the Certificate: RobberPhexCloud
The default storage type you will be using ['NFS'| 'FC'| 'ISCSI'] [NFS] :
Should the installer configure NFS share on this server to be used as an
ISO Domain? ['yes'| 'no'] [yes] :
Mount point path: /mnt/iso
Display name for the ISO Domain: RPiso
Firewall ports need to be opened.
You can let the installer configure iptables automatically overriding the
current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables
file found under /usr/share/ovirt-engine/conf/iptables.example
Configure iptables ? ['yes'| 'no']: yes
oVirt Engine will be installed using the following configuration:
=================================================================
http-port: 8080
https-port: 8443
host-fqdn: RobberPhex-PC
auth-pass: ********
db-pass: ********
org-name: RobberPhexCloud
default-dc-type: NFS
nfs-mp: /mnt/iso
iso-domain-name: RPiso
override-iptables: yes
Proceed with the configuration listed above? (yes|no): yes
Installing:
Configuring oVirt-engine... [ DONE ]
Creating CA... [ DONE ]
Editing JBoss Configuration... [ DONE ]
Setting Database Security... [ DONE ]
Creating Database... [ ERROR ]
Database creation failed
Please check log file
/var/log/ovirt-engine/engine-setup_2012_08_13_01_01_02.log for more
information
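A quick sanity check that the local PostgreSQL instance is running and
reachable might be worth doing before re-running engine-setup (generic
commands, nothing engine-specific assumed):
$ systemctl status postgresql.service
$ sudo -u postgres psql -c 'SELECT version();'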
And here is engine-setup_2012_08_13_01_01_02.log:
2012-08-13 01:03:58::DEBUG::common_utils::196::root:: cmd = /sbin/ip addr
2012-08-13 01:03:58::DEBUG::common_utils::201::root:: output = 1: lo:
<LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast
state DOWN qlen 1000
link/ether 98:4b:e1:ca:1a:ba brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen
1000
link/ether ec:55:f9:6c:54:15 brd ff:ff:ff:ff:ff:ff
inet 172.16.70.224/16 brd 172.16.255.255 scope global wlan0
inet6 fe80::ee55:f9ff:fe6c:5415/64 scope link
valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN
link/ether 52:54:00:78:30:62 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master virbr0
state DOWN qlen 500
link/ether 52:54:00:78:30:62 brd ff:ff:ff:ff:ff:ff
2012-08-13 01:03:58::DEBUG::common_utils::202::root:: stderr =
2012-08-13 01:03:58::DEBUG::common_utils::203::root:: retcode = 0
2012-08-13 01:03:58::DEBUG::common_utils::318::root:: Found IP Address:
172.16.70.224
2012-08-13 01:03:58::DEBUG::common_utils::318::root:: Found IP Address:
192.168.122.1
2012-08-13 01:03:58::DEBUG::engine-setup::2004::root:: initiating command
line option parser
2012-08-13 01:03:58::DEBUG::engine-setup::1897::root:: Entered
main(configFile='None')
2012-08-13 01:03:58::DEBUG::engine-setup::1464::root:: checking the status
of engine
2012-08-13 01:03:58::DEBUG::common_utils::196::root:: cmd = /sbin/service
ovirt-engine status
2012-08-13 01:03:58::DEBUG::common_utils::201::root:: output =
ovirt-engine.service - oVirt Engine
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled)
Active: failed (Result: signal) since Mon, 13 Aug 2012 00:55:25
+0800; 8min ago
Main PID: 587 (code=killed, signal=KILL)
CGroup: name=systemd:/system/ovirt-engine.service
2012-08-13 01:03:58::DEBUG::common_utils::202::root:: stderr = Redirecting
to /bin/systemctl status ovirt-engine.service
2012-08-13 01:03:58::DEBUG::common_utils::203::root:: retcode = 3
2012-08-13 01:03:58::DEBUG::engine-setup::1211::root:: going over group
{'PRE_CONDITION_MATCH': True, 'DESCRIPTION': 'General configuration
parameters', 'POST_CONDITION': False, 'GROUP_NAME': 'ALL_PARAMS',
'PRE_CONDITION': False, 'POST_CONDITION_MATCH': True}
2012-08-13 01:03:59::DEBUG::engine_validators::48::root:: Validating 8080
as a valid TCP Port
2012-08-13 01:03:59::DEBUG::common_utils::175::root:: Checking if TCP port
8080 is open by any process
2012-08-13 01:03:59::DEBUG::common_utils::196::root:: cmd = /usr/sbin/lsof
-i -n -P
2012-08-13 01:03:59::DEBUG::common_utils::201::root:: output = COMMAND
PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd 1 root 29u IPv6 12597 0t0 TCP *:631 (LISTEN)
systemd 1 root 30u IPv4 12598 0t0 UDP *:631
avahi-dae 606 avahi 12u IPv4 16379 0t0 UDP *:5353
avahi-dae 606 avahi 13u IPv4 16380 0t0 UDP *:57705
chronyd 648 chrony 1u IPv4 12086 0t0 UDP *:123
chronyd 648 chrony 2u IPv6 12087 0t0 UDP *:123
chronyd 648 chrony 3u IPv4 12088 0t0 UDP *:323
chronyd 648 chrony 5u IPv6 12089 0t0 UDP *:323
rpcbind 785 root 7u IPv4 18666 0t0 UDP *:111
rpcbind 785 root 8u IPv4 18667 0t0 UDP *:959
rpcbind 785 root 9u IPv4 18668 0t0 TCP *:111 (LISTEN)
rpcbind 785 root 10u IPv6 18669 0t0 UDP *:111
rpcbind 785 root 11u IPv6 18670 0t0 UDP *:959
rpcbind 785 root 12u IPv6 18671 0t0 TCP *:111 (LISTEN)
rpc.statd 828 rpcuser 5u IPv4 15226 0t0 UDP 127.0.0.1:1004
rpc.statd 828 rpcuser 8u IPv4 15296 0t0 UDP *:48348
rpc.statd 828 rpcuser 9u IPv4 15298 0t0 TCP *:49245 (LISTEN)
rpc.statd 828 rpcuser 10u IPv6 15300 0t0 UDP *:48164
rpc.statd 828 rpcuser 11u IPv6 15302 0t0 TCP *:37143 (LISTEN)
dnsmasq 885 nobody 5u IPv4 17073 0t0 UDP *:67
dnsmasq 885 nobody 6u IPv4 17077 0t0 UDP 192.168.122.1:53
dnsmasq 885 nobody 7u IPv4 17078 0t0 TCP
192.168.122.1:53(LISTEN)
rpc.mount 959 root 7u IPv4 15360 0t0 UDP *:20048
rpc.mount 959 root 8u IPv4 20482 0t0 TCP *:20048 (LISTEN)
rpc.mount 959 root 9u IPv6 20484 0t0 UDP *:20048
rpc.mount 959 root 10u IPv6 20486 0t0 TCP *:20048 (LISTEN)
httpd 986 root 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 990 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 991 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 992 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 993 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 994 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 995 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 996 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 997 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
proftpd 1002 nobody 2u IPv6 18309 0t0 TCP *:12021 (LISTEN)
mysqld 1203 mysql 10u IPv4 20816 0t0 TCP *:3306 (LISTEN)
cupsd 1780 root 4u IPv6 12597 0t0 TCP *:631 (LISTEN)
cupsd 1780 root 5u IPv4 12598 0t0 UDP *:631
cupsd 1780 root 12u IPv4 21343 0t0 TCP 127.0.0.1:631(LISTEN)
dhclient 3149 root 6u IPv4 28928 0t0 UDP *:68
dhclient 3149 root 20u IPv4 29992 0t0 UDP *:3721
dhclient 3149 root 21u IPv6 29993 0t0 UDP *:29724
firefox 3153 robberphex 50u IPv4 32247 0t0 TCP
172.16.70.224:40030->74.125.31.18:443 (ESTABLISHED)
firefox 3153 robberphex 65u IPv4 45162 0t0 TCP
172.16.70.224:60227->74.125.31.106:443 (ESTABLISHED)
firefox 3153 robberphex 66u IPv4 45185 0t0 TCP
172.16.70.224:46433->74.125.31.94:443 (ESTABLISHED)
firefox 3153 robberphex 67u IPv4 45190 0t0 TCP
172.16.70.224:37722->74.125.31.138:443 (ESTABLISHED)
telepathy 3224 robberphex 10u IPv4 28333 0t0 TCP
172.16.70.224:43924->74.125.31.125:5222 (ESTABLISHED)
telepathy 3224 robberphex 16u IPv4 31169 0t0 TCP
172.16.70.224:40980->207.46.124.170:5222 (ESTABLISHED)
telepathy 3225 robberphex 8u IPv4 30929 0t0 TCP
172.16.70.224:33843->78.40.125.4:6697 (ESTABLISHED)
plugin-co 4223 robberphex 22u IPv4 34571 0t0 TCP 127.0.0.1:33441->
127.0.0.1:38507 (ESTABLISHED)
GoogleTal 4278 robberphex 16u IPv4 36888 0t0 TCP
127.0.0.1:38507(LISTEN)
GoogleTal 4278 robberphex 18u IPv4 36895 0t0 TCP
127.0.0.1:54688(LISTEN)
GoogleTal 4278 robberphex 21u IPv4 35218 0t0 TCP 127.0.0.1:38507->
127.0.0.1:33441 (ESTABLISHED)
sendmail 4443 root 4u IPv4 37046 0t0 TCP 127.0.0.1:25(LISTEN)
postgres 5872 postgres 3u IPv4 46120 0t0 TCP
127.0.0.1:5432(LISTEN)
postgres 5872 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
postgres 5875 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
postgres 5876 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
postgres 5877 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
postgres 5878 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
2012-08-13 01:03:59::DEBUG::common_utils::202::root:: stderr =
2012-08-13 01:03:59::DEBUG::common_utils::203::root:: retcode = 0
2012-08-13 01:04:00::DEBUG::engine_validators::48::root:: Validating 8443
as a valid TCP Port
2012-08-13 01:04:00::DEBUG::common_utils::175::root:: Checking if TCP port
8443 is open by any process
2012-08-13 01:04:00::DEBUG::common_utils::196::root:: cmd = /usr/sbin/lsof
-i -n -P
2012-08-13 01:04:00::DEBUG::common_utils::201::root:: output = COMMAND
PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd 1 root 29u IPv6 12597 0t0 TCP *:631 (LISTEN)
systemd 1 root 30u IPv4 12598 0t0 UDP *:631
avahi-dae 606 avahi 12u IPv4 16379 0t0 UDP *:5353
avahi-dae 606 avahi 13u IPv4 16380 0t0 UDP *:57705
chronyd 648 chrony 1u IPv4 12086 0t0 UDP *:123
chronyd 648 chrony 2u IPv6 12087 0t0 UDP *:123
chronyd 648 chrony 3u IPv4 12088 0t0 UDP *:323
chronyd 648 chrony 5u IPv6 12089 0t0 UDP *:323
rpcbind 785 root 7u IPv4 18666 0t0 UDP *:111
rpcbind 785 root 8u IPv4 18667 0t0 UDP *:959
rpcbind 785 root 9u IPv4 18668 0t0 TCP *:111 (LISTEN)
rpcbind 785 root 10u IPv6 18669 0t0 UDP *:111
rpcbind 785 root 11u IPv6 18670 0t0 UDP *:959
rpcbind 785 root 12u IPv6 18671 0t0 TCP *:111 (LISTEN)
rpc.statd 828 rpcuser 5u IPv4 15226 0t0 UDP 127.0.0.1:1004
rpc.statd 828 rpcuser 8u IPv4 15296 0t0 UDP *:48348
rpc.statd 828 rpcuser 9u IPv4 15298 0t0 TCP *:49245 (LISTEN)
rpc.statd 828 rpcuser 10u IPv6 15300 0t0 UDP *:48164
rpc.statd 828 rpcuser 11u IPv6 15302 0t0 TCP *:37143 (LISTEN)
dnsmasq 885 nobody 5u IPv4 17073 0t0 UDP *:67
dnsmasq 885 nobody 6u IPv4 17077 0t0 UDP 192.168.122.1:53
dnsmasq 885 nobody 7u IPv4 17078 0t0 TCP
192.168.122.1:53(LISTEN)
rpc.mount 959 root 7u IPv4 15360 0t0 UDP *:20048
rpc.mount 959 root 8u IPv4 20482 0t0 TCP *:20048 (LISTEN)
rpc.mount 959 root 9u IPv6 20484 0t0 UDP *:20048
rpc.mount 959 root 10u IPv6 20486 0t0 TCP *:20048 (LISTEN)
httpd 986 root 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 990 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 991 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 992 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 993 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 994 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 995 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 996 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
httpd 997 apache 3u IPv4 19590 0t0 TCP *:12080 (LISTEN)
proftpd 1002 nobody 2u IPv6 18309 0t0 TCP *:12021 (LISTEN)
mysqld 1203 mysql 10u IPv4 20816 0t0 TCP *:3306 (LISTEN)
cupsd 1780 root 4u IPv6 12597 0t0 TCP *:631 (LISTEN)
cupsd 1780 root 5u IPv4 12598 0t0 UDP *:631
cupsd 1780 root 12u IPv4 21343 0t0 TCP 127.0.0.1:631(LISTEN)
dhclient 3149 root 6u IPv4 28928 0t0 UDP *:68
dhclient 3149 root 20u IPv4 29992 0t0 UDP *:3721
dhclient 3149 root 21u IPv6 29993 0t0 UDP *:29724
firefox 3153 robberphex 50u IPv4 32247 0t0 TCP
172.16.70.224:40030->74.125.31.18:443 (ESTABLISHED)
firefox 3153 robberphex 65u IPv4 45162 0t0 TCP
172.16.70.224:60227->74.125.31.106:443 (ESTABLISHED)
firefox 3153 robberphex 66u IPv4 45185 0t0 TCP
172.16.70.224:46433->74.125.31.94:443 (ESTABLISHED)
firefox 3153 robberphex 67u IPv4 45190 0t0 TCP
172.16.70.224:37722->74.125.31.138:443 (ESTABLISHED)
telepathy 3224 robberphex 10u IPv4 28333 0t0 TCP
172.16.70.224:43924->74.125.31.125:5222 (ESTABLISHED)
telepathy 3224 robberphex 16u IPv4 31169 0t0 TCP
172.16.70.224:40980->207.46.124.170:5222 (ESTABLISHED)
telepathy 3225 robberphex 8u IPv4 30929 0t0 TCP
172.16.70.224:33843->78.40.125.4:6697 (ESTABLISHED)
plugin-co 4223 robberphex 22u IPv4 34571 0t0 TCP 127.0.0.1:33441->
127.0.0.1:38507 (ESTABLISHED)
GoogleTal 4278 robberphex 16u IPv4 36888 0t0 TCP
127.0.0.1:38507(LISTEN)
GoogleTal 4278 robberphex 18u IPv4 36895 0t0 TCP
127.0.0.1:54688(LISTEN)
GoogleTal 4278 robberphex 21u IPv4 35218 0t0 TCP 127.0.0.1:38507->
127.0.0.1:33441 (ESTABLISHED)
sendmail 4443 root 4u IPv4 37046 0t0 TCP 127.0.0.1:25(LISTEN)
postgres 5872 postgres 3u IPv4 46120 0t0 TCP
127.0.0.1:5432(LISTEN)
postgres 5872 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
postgres 5875 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
postgres 5876 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
postgres 5877 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
postgres 5878 postgres 6u IPv4 41925 0t0 UDP 127.0.0.1:42123->
127.0.0.1:42123
2012-08-13 01:04:00::DEBUG::common_utils::202::root:: stderr =
2012-08-13 01:04:00::DEBUG::common_utils::203::root:: retcode = 0
2012-08-13 01:04:00::DEBUG::engine-setup::433::root:: setting default value
(00:1A:4A:A8:7A:00-00:1A:4A:A8:7A:FF) for key (MAC_RANGE)
2012-08-13 01:04:01::INFO::engine_validators::122::root:: Validating
RobberPhex-PC as a FQDN
2012-08-13 01:04:01::INFO::engine_validators::96::root:: validating
RobberPhex-PC as a valid domain string
2012-08-13 01:04:01::DEBUG::engine-setup::519::root:: asking user: User
input failed validation, do you still wish to use it? (yes|no):
2012-08-13 01:04:03::DEBUG::engine-setup::523::root:: user answered: yes
2012-08-13 01:04:06::DEBUG::engine_validators::71::root:: Validating
password
2012-08-13 01:04:06::WARNING::engine_validators::77::root:: Password failed
check
2012-08-13 01:04:06::WARNING::engine_validators::78::root:: Traceback (most
recent call last):
File "/usr/share/ovirt-engine/scripts/engine_validators.py", line 75, in
validatePassword
cracklib.FascistCheck(param)
ValueError: it is based on a dictionary word
2012-08-13 01:04:13::DEBUG::engine_validators::71::root:: Validating
password
2012-08-13 01:04:13::WARNING::engine_validators::77::root:: Password failed
check
2012-08-13 01:04:13::WARNING::engine_validators::78::root:: Traceback (most
recent call last):
File "/usr/share/ovirt-engine/scripts/engine_validators.py", line 75, in
validatePassword
cracklib.FascistCheck(param)
ValueError: it is based on a dictionary word
2012-08-13 01:04:30::INFO::engine_validators::203::root:: validating
organization name
2012-08-13 01:04:31::INFO::engine_validators::84::root:: Validating NFS as
part of ['NFS', 'FC', 'ISCSI']
2012-08-13 01:04:31::DEBUG::engine-setup::1211::root:: going over group
{'PRE_CONDITION_MATCH': 'yes', 'DESCRIPTION': 'ISO Domain paramters',
'POST_CONDITION': False, 'GROUP_NAME': 'NFS', 'PRE_CONDITION':
'CONFIG_NFS', 'POST_CONDITION_MATCH': True}
2012-08-13 01:04:31::INFO::engine_validators::84::root:: Validating yes as
part of ['yes', 'no']
2012-08-13 01:04:36::INFO::engine_validators::17::root:: validating
/mnt/iso as a valid mount point
2012-08-13 01:04:36::DEBUG::engine_validators::288::root:: attempting to
write temp file to /mnt
2012-08-13 01:04:36::DEBUG::common_utils::349::root:: Checking available
space on /mnt
2012-08-13 01:04:36::DEBUG::common_utils::354::root:: Available space on
/mnt is 59217
2012-08-13 01:04:42::INFO::engine_validators::186::root:: validating iso
domain name
2012-08-13 01:04:42::DEBUG::engine-setup::1211::root:: going over group
{'PRE_CONDITION_MATCH': True, 'DESCRIPTION': 'Firewall related paramters',
'POST_CONDITION': False, 'GROUP_NAME': 'IPTABLES', 'PRE_CONDITION': False,
'POST_CONDITION_MATCH': True}
2012-08-13 01:04:47::INFO::engine_validators::84::root:: Validating yes as
part of ['yes', 'no']
2012-08-13 01:04:47::INFO::engine-setup::1292::root:: *** User input
summary ***
2012-08-13 01:04:47::INFO::engine-setup::1307::root:: http-port: 8080
2012-08-13 01:04:47::INFO::engine-setup::1307::root:: https-port: 8443
2012-08-13 01:04:47::INFO::engine-setup::1307::root:: host-fqdn:
RobberPhex-PC
2012-08-13 01:04:47::INFO::engine-setup::1303::root:: auth-pass: ********
2012-08-13 01:04:47::INFO::engine-setup::1303::root:: db-pass: ********
2012-08-13 01:04:47::INFO::engine-setup::1307::root:: org-name:
RobberPhexCloud
2012-08-13 01:04:47::INFO::engine-setup::1307::root:: default-dc-type: NFS
2012-08-13 01:04:47::INFO::engine-setup::1307::root:: nfs-mp: /mnt/iso
2012-08-13 01:04:47::INFO::engine-setup::1307::root:: iso-domain-name: RPiso
2012-08-13 01:04:47::INFO::engine-setup::1307::root:: override-iptables: yes
2012-08-13 01:04:47::INFO::engine-setup::1309::root:: *** User input
summary ***
2012-08-13 01:04:47::DEBUG::engine-setup::519::root:: asking user: Proceed
with the configuration listed above? (yes|no):
2012-08-13 01:04:49::DEBUG::engine-setup::523::root:: user answered: yes
2012-08-13 01:04:49::DEBUG::engine-setup::1336::root:: user chose to accept
user parameters
2012-08-13 01:04:49::DEBUG::engine-setup::1923::root:: {'ORG_NAME':
'RobberPhexCloud', 'HOST_FQDN': 'RobberPhex-PC', 'AUTH_PASS_CONFIRMED':
'********', 'HTTP_PORT': '8080', 'HTTPS_PORT': '8443', 'DB_PASS_CONFIRMED':
'********', 'CONFIG_NFS': 'yes', 'AUTH_PASS': '********', 'DB_PASS':
'********', 'ISO_DOMAIN_NAME': 'RPiso', 'MAC_RANGE':
'00:1A:4A:A8:7A:00-00:1A:4A:A8:7A:FF', 'NFS_MP': '/mnt/iso',
'OVERRIDE_IPTABLES': 'yes', 'DC_TYPE': 'NFS'}
2012-08-13 01:04:49::DEBUG::engine-setup::1926::root:: Entered
Configuration stage
2012-08-13 01:04:49::DEBUG::engine-setup::1016::root:: running
setMaxSharedMemory
2012-08-13 01:04:49::DEBUG::engine-setup::1390::root:: loading
/etc/sysctl.conf
2012-08-13 01:04:49::DEBUG::engine-setup::1397::root:: current shared
memory max in kernel is 35554432, there is no need to update the kernel
parameters
2012-08-13 01:04:49::DEBUG::engine-setup::1019::root:: running _createCA
2012-08-13 01:04:49::DEBUG::engine-setup::729::root:: updating
/etc/pki/ovirt-engine/cacert.template
2012-08-13 01:04:49::DEBUG::engine-setup::729::root:: updating
/etc/pki/ovirt-engine/cert.template
2012-08-13 01:04:49::DEBUG::engine-setup::647::root:: current timezone
offset is -8
2012-08-13 01:04:49::DEBUG::engine-setup::661::root:: Date string is
120812010449-0800
2012-08-13 01:04:49::DEBUG::common_utils::219::root:: Executing command -->
'/etc/pki/ovirt-engine/installCA.sh RobberPhex-PC US RobberPhexCloud engine
******** 120812010449-0800 /etc/pki/ovirt-engine RobberPhex-PC.59529'
2012-08-13 01:04:52::DEBUG::common_utils::226::root:: output =
} Creating CA...
} Creating KeyStore...
}} Converting formats...
> Importing CA certificate...
} Creating client certificate for oVirt...
}} Creating certificate request...
}} Signing certificate request...
X509v3 Subject Key Identifier:
}} Converting formats...
} Importing oVirt certificate...
} Exporting oVirt key as SSH...
2012-08-13 01:04:52::DEBUG::common_utils::227::root:: stderr = Generating
RSA private key, 1024 bit long modulus
.....................................................................++++++
...............++++++
e is 65537 (0x10001)
Using configuration from openssl.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
organizationName :PRINTABLE:'RobberPhexCloud'
commonName :PRINTABLE:'CA-RobberPhex-PC.59529'
Certificate is to be certified until Aug 10 17:04:49 2022 GMT (3650 days)
Write out database with 1 new entries
Data Base Updated
Certificate was added to keystore
Certificate was added to keystore
Using configuration from openssl.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
organizationName :PRINTABLE:'RobberPhexCloud'
commonName :PRINTABLE:'RobberPhex-PC'
Certificate is to be certified until Jul 17 17:04:51 2017 GMT (1800 days)
Write out database with 1 new entries
Data Base Updated
Certificate reply was installed in keystore
Certificate stored in file <keys/engine.pub.tmp>
2012-08-13 01:04:52::DEBUG::common_utils::228::root:: retcode = 0
2012-08-13 01:04:52::DEBUG::common_utils::219::root:: Executing command -->
'/etc/pki/ovirt-engine/generate-ssh-keys -s /etc/pki/ovirt-engine/.keystore
-p ******** -a engine -k /etc/pki/ovirt-engine/keys/engine_id_rsa'
2012-08-13 01:04:52::DEBUG::common_utils::226::root:: output = Finished
successfuly
2012-08-13 01:04:52::DEBUG::common_utils::227::root:: stderr =
2012-08-13 01:04:52::DEBUG::common_utils::228::root:: retcode = 0
2012-08-13 01:04:52::DEBUG::common_utils::406::root:: successfully copied
file /etc/pki/ovirt-engine/keys/engine.ssh.key.txt to target destination
/usr/share/ovirt-engine/deployments/ROOT.war
2012-08-13 01:04:52::DEBUG::common_utils::414::root:: setting file
/usr/share/ovirt-engine/deployments/ROOT.war/engine.ssh.key.txt uid/gid
ownership
2012-08-13 01:04:52::DEBUG::common_utils::417::root:: setting file
/usr/share/ovirt-engine/deployments/ROOT.war/engine.ssh.key.txt mode to -1
2012-08-13 01:04:52::DEBUG::common_utils::219::root:: Executing command -->
'/usr/bin/openssl x509 -in /etc/pki/ovirt-engine/ca.pem -fingerprint -noout'
2012-08-13 01:04:52::DEBUG::common_utils::226::root:: output = SHA1
Fingerprint=38:C1:B1:E0:4F:D6:D6:18:7D:C9:29:BB:0A:AB:60:76:FC:0E:6A:4F
2012-08-13 01:04:52::DEBUG::common_utils::227::root:: stderr =
2012-08-13 01:04:52::DEBUG::common_utils::228::root:: retcode = 0
2012-08-13 01:04:52::DEBUG::common_utils::219::root:: Executing command -->
'/usr/bin/ssh-keygen -lf /etc/pki/ovirt-engine/keys/engine.ssh.key.txt'
2012-08-13 01:04:52::DEBUG::common_utils::226::root:: output = 2048
a8:7f:bd:f9:5d:6f:2d:b0:5f:68:ec:9d:8c:57:3b:63 engine (RSA)
2012-08-13 01:04:52::DEBUG::common_utils::227::root:: stderr =
2012-08-13 01:04:52::DEBUG::common_utils::228::root:: retcode = 0
2012-08-13 01:04:52::DEBUG::engine-setup::722::root:: changing ownership of
/etc/pki/ovirt-engine/ca.pem to 108/986 (uid/gid)
2012-08-13 01:04:52::DEBUG::engine-setup::724::root:: changing file
permissions for /etc/pki/ovirt-engine/ca.pem to 0750
2012-08-13 01:04:52::DEBUG::engine-setup::722::root:: changing ownership of
/etc/pki/ovirt-engine/.keystore to 108/986 (uid/gid)
2012-08-13 01:04:52::DEBUG::engine-setup::724::root:: changing file
permissions for /etc/pki/ovirt-engine/.keystore to 0750
2012-08-13 01:04:52::DEBUG::engine-setup::722::root:: changing ownership of
/etc/pki/ovirt-engine/private to 108/986 (uid/gid)
2012-08-13 01:04:52::DEBUG::engine-setup::724::root:: changing file
permissions for /etc/pki/ovirt-engine/private to 0750
2012-08-13 01:04:52::DEBUG::engine-setup::722::root:: changing ownership of
/etc/pki/ovirt-engine/private/ca.pem to 108/986 (uid/gid)
2012-08-13 01:04:52::DEBUG::engine-setup::724::root:: changing file
permissions for /etc/pki/ovirt-engine/private/ca.pem to 0750
2012-08-13 01:04:52::DEBUG::engine-setup::722::root:: changing ownership of
/etc/pki/ovirt-engine/.truststore to 108/986 (uid/gid)
2012-08-13 01:04:52::DEBUG::engine-setup::724::root:: changing file
permissions for /etc/pki/ovirt-engine/.truststore to 0750
2012-08-13 01:04:52::DEBUG::common_utils::406::root:: successfully copied
file /etc/pki/ovirt-engine/keys/engine.ssh.key.txt to target destination
/usr/share/ovirt-engine/deployments/ROOT.war
2012-08-13 01:04:52::DEBUG::common_utils::414::root:: setting file
/usr/share/ovirt-engine/deployments/ROOT.war/engine.ssh.key.txt uid/gid
ownership
2012-08-13 01:04:52::DEBUG::common_utils::417::root:: setting file
/usr/share/ovirt-engine/deployments/ROOT.war/engine.ssh.key.txt mode to -1
2012-08-13 01:04:52::DEBUG::engine-setup::1016::root:: running
configJbossXml
2012-08-13 01:04:52::DEBUG::engine-setup::1710::root:: Backing up
/etc/ovirt-engine/ovirt-engine.xml into
/etc/ovirt-engine/ovirt-engine.xml.BACKUP.4141705
2012-08-13 01:04:52::DEBUG::common_utils::406::root:: successfully copied
file /etc/ovirt-engine/ovirt-engine.xml to target destination
/etc/ovirt-engine/ovirt-engine.xml.BACKUP.4141705
2012-08-13 01:04:52::DEBUG::common_utils::414::root:: setting file
/etc/ovirt-engine/ovirt-engine.xml.BACKUP.4141705 uid/gid ownership
2012-08-13 01:04:52::DEBUG::common_utils::417::root:: setting file
/etc/ovirt-engine/ovirt-engine.xml.BACKUP.4141705 mode to -1
2012-08-13 01:04:52::DEBUG::common_utils::406::root:: successfully copied
file /etc/ovirt-engine/ovirt-engine.xml to target destination
/etc/ovirt-engine/ovirt-engine.xml.EDIT.9205173
2012-08-13 01:04:52::DEBUG::common_utils::414::root:: setting file
/etc/ovirt-engine/ovirt-engine.xml.EDIT.9205173 uid/gid ownership
2012-08-13 01:04:52::DEBUG::common_utils::417::root:: setting file
/etc/ovirt-engine/ovirt-engine.xml.EDIT.9205173 mode to -1
2012-08-13 01:04:52::DEBUG::engine-setup::1715::root:: loading xml file
handler
2012-08-13 01:04:52::DEBUG::engine-setup::1722::root:: Configuring Jboss
2012-08-13 01:04:52::DEBUG::engine-setup::1781::root:: Configuring Jboss's
network
2012-08-13 01:04:52::DEBUG::engine-setup::1783::root:: Removing all
interfaces from the public interface
2012-08-13 01:04:52::DEBUG::engine-setup::1786::root:: Adding access to
public interface
2012-08-13 01:04:52::DEBUG::engine-setup::1789::root:: Setting ports
2012-08-13 01:04:52::DEBUG::engine-setup::1796::root:: Network has been
configured for jboss
2012-08-13 01:04:52::DEBUG::engine-setup::1802::root:: Configuring SSL for
jboss
2012-08-13 01:04:52::DEBUG::engine-setup::1804::root:: Registering web
namespaces
2012-08-13 01:04:52::DEBUG::engine-setup::1814::root:: Disabling default
welcome-content
2012-08-13 01:04:52::DEBUG::engine-setup::1818::root:: SSL has been
configured for jboss
2012-08-13 01:04:52::DEBUG::engine-setup::1725::root:: Jboss has been
configured
2012-08-13 01:04:52::DEBUG::engine-setup::1731::root:: Jboss configuration
has been saved
2012-08-13 01:04:52::DEBUG::engine-setup::1016::root:: running _editRootWar
2012-08-13 01:04:52::DEBUG::engine-setup::595::root:: update
/etc/ovirt-engine/web-conf.js with http & ssl urls
2012-08-13 01:04:52::DEBUG::common_utils::406::root:: successfully copied
file /etc/ovirt-engine/web-conf.js to target destination
/usr/share/ovirt-engine/deployments/ROOT.war
2012-08-13 01:04:52::DEBUG::common_utils::414::root:: setting file
/usr/share/ovirt-engine/deployments/ROOT.war/web-conf.js uid/gid ownership
2012-08-13 01:04:52::DEBUG::common_utils::417::root:: setting file
/usr/share/ovirt-engine/deployments/ROOT.war/web-conf.js mode to -1
2012-08-13 01:04:52::DEBUG::engine-setup::583::root:: copying
/etc/pki/ovirt-engine/ca.pem to
/usr/share/ovirt-engine/deployments/ROOT.war/ca.crt
2012-08-13 01:04:52::DEBUG::common_utils::406::root:: successfully copied
file /etc/pki/ovirt-engine/ca.pem to target destination
/usr/share/ovirt-engine/deployments/ROOT.war/ca.crt
2012-08-13 01:04:52::DEBUG::common_utils::414::root:: setting file
/usr/share/ovirt-engine/deployments/ROOT.war/ca.crt uid/gid ownership
2012-08-13 01:04:52::DEBUG::common_utils::417::root:: setting file
/usr/share/ovirt-engine/deployments/ROOT.war/ca.crt mode to -1
2012-08-13 01:04:52::DEBUG::engine-setup::1635::root:: checking if rhevm db
is already installed..
2012-08-13 01:04:52::DEBUG::common_utils::235::root:: running sql query
engine on db: 'select 1'.
2012-08-13 01:04:52::DEBUG::common_utils::196::root:: cmd = /usr/bin/psql
-U postgres -d engine -c "select 1"
2012-08-13 01:04:52::DEBUG::common_utils::201::root:: output =
2012-08-13 01:04:52::DEBUG::common_utils::202::root:: stderr = psql: FATAL:
database "engine" does not exist
2012-08-13 01:04:52::DEBUG::common_utils::203::root:: retcode = 2
2012-08-13 01:04:52::DEBUG::engine-setup::1016::root:: running
_updatePgPassFile
2012-08-13 01:04:52::DEBUG::engine-setup::950::root:: found existing pgpass
file, backing current to /root/.pgpass.2012_08_13_01_04_52
2012-08-13 01:04:52::DEBUG::engine-setup::1016::root:: running
_encryptDBPass
2012-08-13 01:04:52::DEBUG::common_utils::219::root:: Executing command -->
'/etc/pki/ovirt-engine/encryptpasswd.sh ********'
2012-08-13 01:04:52::DEBUG::common_utils::226::root:: output = Encoded
password: -39a96ce1daaca3a5
2012-08-13 01:04:52::DEBUG::common_utils::227::root:: stderr =
2012-08-13 01:04:52::DEBUG::common_utils::228::root:: retcode = 0
2012-08-13 01:04:52::DEBUG::common_utils::376::root:: found new parsed
string: -39a96ce1daaca3a5
2012-08-13 01:04:52::DEBUG::engine-setup::1016::root:: running
configEncryptedPass
2012-08-13 01:04:52::DEBUG::engine-setup::1677::root:: Backing up
/etc/ovirt-engine/ovirt-engine.xml into None
2012-08-13 01:04:52::DEBUG::common_utils::406::root:: successfully copied
file /etc/ovirt-engine/ovirt-engine.xml to target destination
/etc/ovirt-engine/ovirt-engine.xml.EDIT.6399505
2012-08-13 01:04:52::DEBUG::common_utils::414::root:: setting file
/etc/ovirt-engine/ovirt-engine.xml.EDIT.6399505 uid/gid ownership
2012-08-13 01:04:52::DEBUG::common_utils::417::root:: setting file
/etc/ovirt-engine/ovirt-engine.xml.EDIT.6399505 mode to -1
2012-08-13 01:04:52::DEBUG::engine-setup::1681::root:: loading xml file
handler
2012-08-13 01:04:52::DEBUG::engine-setup::1742::root:: Configuring security
for jboss
2012-08-13 01:04:52::DEBUG::engine-setup::1744::root:: Registering security
namespaces
2012-08-13 01:04:53::DEBUG::engine-setup::1774::root:: Security has been
configured for jboss
2012-08-13 01:04:53::DEBUG::engine-setup::1693::root:: Jboss configuration
has been saved
2012-08-13 01:04:53::DEBUG::engine-setup::1016::root:: running _createDB
2012-08-13 01:04:53::DEBUG::engine-setup::806::root:: installing postgres db
2012-08-13 01:04:53::DEBUG::engine-setup::809::root:: engine db creation is
logged at /var/log/ovirt-engine//engine-db-install-2012_08_13_01_04_53.log
2012-08-13 01:04:53::DEBUG::common_utils::219::root:: Executing command -->
'/usr/share/ovirt-engine/dbscripts/engine-db-install.sh
engine-db-install-2012_08_13_01_04_53.log ********'
2012-08-13 01:04:56::DEBUG::common_utils::226::root:: output = error,
failed creating enginedb
2012-08-13 01:04:56::DEBUG::common_utils::227::root:: stderr =
2012-08-13 01:04:56::DEBUG::common_utils::228::root:: retcode = 1
2012-08-13 01:04:56::DEBUG::engine-setup::1614::root:: *** The following
params were used as user input:
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: override-iptables:
yes
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: http-port: 8080
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: https-port: 8443
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: mac-range:
00:1A:4A:A8:7A:00-00:1A:4A:A8:7A:FF
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: host-fqdn:
RobberPhex-PC
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: auth-pass: ********
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: db-pass: ********
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: org-name:
RobberPhexCloud
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: default-dc-type: NFS
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: config-nfs: yes
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: nfs-mp: /mnt/iso
2012-08-13 01:04:56::DEBUG::engine-setup::1618::root:: iso-domain-name:
RPiso
2012-08-13 01:04:56::ERROR::engine-setup::2115::root:: Traceback (most
recent call last):
File "/bin/engine-setup", line 2109, in <module>
main(confFile)
File "/bin/engine-setup", line 1930, in main
runMainFunctions(conf)
File "/bin/engine-setup", line 1852, in runMainFunctions
runFunction([_createDB, _updateVDCOptions],
output_messages.INFO_CREATE_DB)
File "/bin/engine-setup", line 1026, in runFunction
raise Exception(instance)
Exception: Database creation failed
I think postgresql-Mon.log is useful:
LOG: database system was shut down at 2012-08-13 01:00:57 CST
LOG: autovacuum launcher started
LOG: database system is ready to accept connections
FATAL: database "engine" does not exist
LOG: received fast shutdown request
LOG: aborting any active transactions
LOG: autovacuum launcher shutting down
LOG: shutting down
LOG: database system is shut down
LOG: database system was shut down at 2012-08-13 01:01:46 CST
LOG: autovacuum launcher started
LOG: database system is ready to accept connections
FATAL: database "engine" does not exist
ERROR: database "engine" does not exist
STATEMENT: DROP DATABASE engine;
ERROR: extension "uuid-ossp" already exists
STATEMENT: CREATE EXTENSION "uuid-ossp";
Any idea what's wrong and how I can fix it?
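One possible culprit, judging from the postgres log above, is a leftover
"uuid-ossp" extension from an earlier install attempt (most likely sitting in
template1), which would make engine-db-install.sh abort before the engine
database is created. That is only a guess - the real reason should be in
/var/log/ovirt-engine/engine-db-install-2012_08_13_01_04_53.log - but a quick
check could look like this (PostgreSQL 9.1 syntax, same psql invocation that
engine-setup itself uses):
/usr/bin/psql -U postgres -d template1 -c '\dx'
/usr/bin/psql -U postgres -d template1 -c 'DROP EXTENSION "uuid-ossp";'   # only if the extension is listed by the previous command
engine-cleanup && engine-setup
If the extension shows up in the first command, dropping it and re-running
engine-cleanup/engine-setup may be enough.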
11 years, 9 months
[Users] mozilla-xpi for Ubuntu
by Mario Giammarco
Hello,
I need a working mozilla-xpi for Ubuntu 12.04 (and soon 12.10).
It is strange that an open-source project such as oVirt only works on Fedora.
Thanks in advance for any help.
Mario
11 years, 11 months
Re: [Users] Fwd: Trouble with SSO rhev-agent and rhev-agent-pam-rhev-cred
by Gal Hammer
On 20/08/2012 08:31, Roy Golan wrote:
> Cannot login with SSO on system...
>
> cat /var/log/secure
>
> Aug 19 03:54:43 ws2 pam: gdm-rhevcred[2618]:
> pam_unix(gdm-rhevcred:auth): conversation failed
> Aug 19 03:54:43 ws2 pam: gdm-rhevcred[2618]:
> pam_unix(gdm-rhevcred:auth): auth could not identify password for
> [sirin]
> Aug 19 03:54:43 ws2 pam: gdm-rhevcred[2618]:
> pam_sss(gdm-rhevcred:auth): system info: [Cannot read password]
> Aug 19 03:54:43 ws2 pam: gdm-rhevcred[2618]:
> pam_sss(gdm-rhevcred:auth): authentication failure; logname= uid=0
> euid=0 tty=:0 ruser= rhost= user=sirin
> Aug 19 03:54:43 ws2 pam: gdm-rhevcred[2618]:
> pam_sss(gdm-rhevcred:auth): received for user sirin: 4 (System error)
> Aug 19 03:54:43 ws2 pam: gdm-password[2617]:
> pam_unix(gdm-password:auth): conversation failed
> Aug 19 03:54:43 ws2 pam: gdm-password[2617]:
> pam_unix(gdm-password:auth): auth could not identify password for
> [sirin]
> Aug 19 03:54:43 ws2 pam: gdm-password[2617]:
> pam_sss(gdm-password:auth): system info: [Cannot read password]
> Aug 19 03:54:43 ws2 pam: gdm-password[2617]:
> pam_sss(gdm-password:auth): authentication failure; logname= uid=0
> euid=0 tty=:0 ruser= rhost= user=sirin
> Aug 19 03:54:43 ws2 pam: gdm-password[2617]:
> pam_sss(gdm-password:auth): received for user sirin: 4 (System error)
> Aug 19 03:54:43 ws2 pam: gdm-password[2617]: gkr-pam: no password is
> available for user
>
> But login with user and password done... I use FreeIPA for this user.
>
> What could be wrong?
What does the agent's log say (/var/log/ovirt-guest-agent.log)?
Usually, if everything is running as it should, the problem is that the
Linux machine is not configured to work with the same authentication
server as the one that RHEV-M is using.
Gal.
12 years
[Users] How to import KVM machines into oVirt iSCSI
by Nicolas Ecarnot
Hi,
Reading the docs all day long helped me to set up a nice data center in
iSCSI mode, connected to a big LUN on a SAN.
Many, many points are working, mostly thanks to you, people of this list.
Apart from this oVirt setup (1 manager, 3 nodes, 1 SAN), I have a
completely separate Ubuntu hypervisor running a standalone local KVM
with local storage.
I don't have a clear view of how I will manage to import these VMs into oVirt.
Of course, I've read about ovirt-v2v (and its huge amount of
dependencies...), but I'm not sure this is the way to go.
As far as I've understood, v2v seems to be dedicated to connections
between oVirt datacenters, or VMware or Xen platforms, but I see
nothing about connecting to a distant standalone KVM hypervisor.
One more thing that is unclear to me, and seems related, is the notion
of the export / import domain. I read that this principle could allow me to
export (and then back up) my VMs. This could help me to import some VMs.
But I read that this export domain has to be the same type as my
datacenter (iSCSI), so this is not helping me with my standalone KVM
hypervisor.
I'd be glad to get some light on these points.
--
Nicolas Ecarnot
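For what it's worth, virt-v2v can read a guest straight from a standalone
libvirt/KVM host over ssh and drop it into an oVirt export (NFS) domain, from
where the webadmin "Import VM" dialog picks it up. A rough sketch - host name,
guest name and export path are placeholders, and the exact flags are worth
checking against the virt-v2v man page for your version:
virt-v2v -ic qemu+ssh://root@kvm-host/system -o rhev -os nfs.example.com:/export myvm   # kvm-host, nfs.example.com:/export and myvm are placeholders
If memory serves, export domains are NFS-based even when the data domain is
iSCSI, so the domain-type mismatch mentioned above may not be a blocker.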
12 years, 1 month
[Users] Vdsm/libvir error during deploy
by Joop
While using the latest nightly I can't deploy new hosts using oVirt.
rpm -aq | grep ovirt
ovirt-engine-backend-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-engine-dbscripts-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-iso-uploader-3.1.0-1.fc17.noarch
ovirt-engine-tools-common-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-engine-userportal-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-host-deploy-0.0.0-0.0.master.20121220.gitec2416f.fc17.noarch
ovirt-engine-config-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-engine-cli-3.2.0.7-1.20121219.git5eddf58.fc17.noarch
ovirt-engine-webadmin-portal-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-engine-restapi-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-engine-genericapi-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-host-deploy-java-0.0.0-0.0.master.20121220.gitec2416f.fc17.noarch
ovirt-engine-notification-service-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-engine-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
ovirt-release-fedora-5-2.noarch
ovirt-image-uploader-3.1.0-1.fc17.noarch
ovirt-engine-sdk-3.2.0.5-1.20121219.gitc0ab704.fc17.noarch
ovirt-log-collector-3.1.0-1.fc17.noarch
ovirt-engine-setup-3.2.0-1.20121220.git9fdb0c2.fc17.noarch
I did an engine-cleanup/engine-setup to start with a clean slate. Further,
I have a host that was used before and keeps losing network connectivity
when the vdsmd service is started, so I did several yum removes of packages
(libvirt-related, qemu-related and vdsm*) and removed the /etc/ and
/var entries where needed. Then I ran add host and ended up with the same
error I had before; the full deploy log is attached, but this is the real and
only set of errors:
2012-12-21 11:00:26 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.bridge
plugin.executeRaw:324 execute: ['/usr/share/vdsm/addNetwork', 'ovirtmgmt',
'', '', u'em1', 'ONBOOT=yes', 'IPADDR=192.168.216.152',
'DNS2=172.19.1.18', 'DNS1=172.19.1.12',
'UUID=e121a99a-994e-479d-8de1-a56c14315545', 'IPV6INIT=no', 'USERCTL=no',
'GATEWAY=192.168.216.254', 'NETMASK=255.255.255.0', 'blockingdhcp=true'],
env=None
2012-12-21 11:00:30 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.bridge
plugin.executeRaw:341 execute-result: ['/usr/share/vdsm/addNetwork',
'ovirtmgmt', '', '', u'em1', 'ONBOOT=yes', 'IPADDR=192.168.216.152',
'DNS2=172.19.1.18', 'DNS1=172.19.1.12',
'UUID=e121a99a-994e-479d-8de1-a56c14315545', 'IPV6INIT=no', 'USERCTL=no',
'GATEWAY=192.168.216.254', 'NETMASK=255.255.255.0', 'blockingdhcp=true'],
rc=255
2012-12-21 11:00:30 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.bridge
plugin.execute:388 execute-output: ['/usr/share/vdsm/addNetwork',
'ovirtmgmt', '', '', u'em1', 'ONBOOT=yes', 'IPADDR=192.168.216.152',
'DNS2=172.19.1.18', 'DNS1=172.19.1.12',
'UUID=e121a99a-994e-479d-8de1-a56c14315545', 'IPV6INIT=no', 'USERCTL=no',
'GATEWAY=192.168.216.254', 'NETMASK=255.255.255.0', 'blockingdhcp=true']
stdout:
2012-12-21 11:00:30 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.bridge
plugin.execute:393 execute-output: ['/usr/share/vdsm/addNetwork',
'ovirtmgmt', '', '', u'em1', 'ONBOOT=yes', 'IPADDR=192.168.216.152',
'DNS2=172.19.1.18', 'DNS1=172.19.1.12',
'UUID=e121a99a-994e-479d-8de1-a56c14315545', 'IPV6INIT=no', 'USERCTL=no',
'GATEWAY=192.168.216.254', 'NETMASK=255.255.255.0', 'blockingdhcp=true']
stderr:
WARNING:Storage.LVM:Cannot create env file [Errno 2] No such file or
directory: '/var/run/vdsm/lvm.env'
WARNING:root:options IPADDR is deprecated. Use ipaddr instead
WARNING:root:options NETMASK is deprecated. Use netmask instead
WARNING:root:options GATEWAY is deprecated. Use gateway instead
WARNING:root:options ONBOOT is deprecated. Use onboot instead
INFO:root:Adding network ovirtmgmt with vlan=, bonding=, nics=['em1'],
bondingOptions=None, mtu=None, bridged=True, options={'blockingdhcp':
'true', 'UUID': 'e121a99a-994e-479d-8de1-a56c14315545', 'USERCTL': 'no',
'DNS2': '172.19.1.18', 'DNS1': '172.19.1.12', 'onboot': 'yes', 'IPV6INIT':
'no'}
libvir: Network Driver error : Network not found: no network with matching
name 'vdsm-ovirtmgmt'
libvir: Network Driver error : Network not found: no network with matching
name 'vdsm-ovirtmgmt'
libvir: Network Driver error : Requested operation is not valid: cannot
set autostart for transient network
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/share/vdsm/configNetwork.py", line 1489, in <module>
main()
File "/usr/share/vdsm/configNetwork.py", line 1458, in main
addNetwork(bridge, **kwargs)
File "/usr/share/vdsm/configNetwork.py", line 1017, in addNetwork
configWriter.createLibvirtNetwork(network, bridged, iface)
File "/usr/share/vdsm/configNetwork.py", line 200, in createLibvirtNetwork
self._createNetwork(netXml)
File "/usr/share/vdsm/configNetwork.py", line 184, in _createNetwork
net.setAutostart(1)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2148, in
setAutostart
if ret == -1: raise libvirtError ('virNetworkSetAutostart() failed',
net=self)
libvirt.libvirtError: Requested operation is not valid: cannot set
autostart for transient network
2012-12-21 11:00:30 DEBUG otopi.context context._executeMethod:127 method
exception
Traceback (most recent call last):
File "/tmp/ovirt-o5lusc2CYC/pythonlib/otopi/context.py", line 117, in
_executeMethod
method['method']()
File
"/tmp/ovirt-o5lusc2CYC/otopi-plugins/ovirt-host-deploy/vdsm/bridge.py",
line 736, in _misc
parameters=parameters,
File
"/tmp/ovirt-o5lusc2CYC/otopi-plugins/ovirt-host-deploy/vdsm/bridge.py",
line 492, in _createBridge
parameters
File "/tmp/ovirt-o5lusc2CYC/pythonlib/otopi/plugin.py", line 398, in
execute
command=args[0],
RuntimeError: Command '/usr/share/vdsm/addNetwork' failed to execute
2012-12-21 11:00:30 ERROR otopi.context context._executeMethod:136 Failed
to execute stage 'Misc configuration': Command
'/usr/share/vdsm/addNetwork' failed to execute
2012-12-21 11:00:30 DEBUG otopi.transaction transaction.abort:131 aborting
'Yum Transaction'
This is a host that started life as a standard Fed17 install from the LiveCD
and it worked with ovirt-3.1, but now I can't get it to work with the
oVirt nightlies. I have two other hosts which started from the same
install and they work; at least everything gets installed and the hosts are
visible and up in oVirt. I did a diff of 'rpm -aq | sort' on a working and a
non-working host, but the diff is minimal: some extra packages (mc/..) and some
very small version diffs (1.2.3-x instead of y).
Any help appreciated,
Joop
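In case it helps narrow this down: the traceback dies in setAutostart() on a
transient 'vdsm-ovirtmgmt' libvirt network, which looks like a stale network
left behind by the earlier install. Just a guess, but a quick look on the host
could be (on a vdsm-configured host virsh may ask for SASL credentials):
virsh -r net-list --all
virsh net-destroy vdsm-ovirtmgmt    # only if vdsm-ovirtmgmt shows up in the listing above
virsh net-undefine vdsm-ovirtmgmt   # only if a persistent definition exists
The destroy/undefine lines only make sense if the network actually shows up in
the listing; afterwards re-running the host deploy would recreate it.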
12 years, 2 months
[Users] host power managment failures
by Dead Horse
Current running engine build --> commit:
61c11aecc40e755d08b6c34c6fe1c0a07fa94de8
Host power management is having some issues:
2013-01-28 13:15:49,320 ERROR [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-11) Illegal value in PM Proxy Preferences string ,
skipped.
2013-01-28 13:15:49,321 ERROR [org.ovirt.engine.core.bll.FenceExecutor]
(ajp--127.0.0.1-8702-11) Failed to run Power Management command on Host ,
no running proxy Host was found.
- DHC
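The 'Illegal value in PM Proxy Preferences string' line suggests the fencing
proxy preference option in the engine configuration is empty or malformed. If
I remember right the key is FenceProxyDefaultPreferences and it normally holds
something like 'cluster,dc' - treat the key name as an assumption and confirm
it with 'engine-config --list' first:
engine-config -g FenceProxyDefaultPreferences
engine-config -s FenceProxyDefaultPreferences=cluster,dc   # key name and value from memory - verify before setting
service ovirt-engine restart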
12 years, 2 months
[Users] HowTo: Spice ActiveX Plugin/Virt Viewer Console on oVirt 3.1
by Dead Horse
I have seen this question asked many times on this list and the spice-devel
list. Now having figured out how to make it work I will provide the answer
to the rest of the community.
*NOTE* this only applies to Windows/Internet Explorer users. The only other
option for Windows users ATM is the html5 spice console (still beta/in
development). This will also only work with Internet Explorer.
Basic Steps:
- Install an oVirt server.
- Add at least one node
- Setup storage/iso/export domains
Advanced steps
- Start by downloading: http://elmarco.fedorapeople.org/spice.cab
- Create a directory for it oVirt looks by default in /usr/share/spice
- Rename spice.cab to SpiceX.cab and copy it into /usr/share/spice
- Now edit web.xml under /usr/share/ovirt-engine/engine.ear/root.war/WEB-INF
and add the following:
<!-- SpiceX.cab -->
<servlet>
<servlet-name>SpiceX.cab</servlet-name>
<servlet-class>org.ovirt.engine.core.FileServlet</servlet-class>
<init-param>
<param-name>type</param-name>
<param-value>application/octet-stream</param-value>
</init-param>
<init-param>
<param-name>file</param-name>
<param-value>/usr/share/spice/SpiceX.cab</param-value>
</init-param>
</servlet>
<servlet-mapping>
<servlet-name>SpiceX.cab</servlet-name>
<url-pattern>/spice/SpiceX.cab</url-pattern>
</servlet-mapping>
- Next create an html file within
/usr/share/ovirt-engine/engine.ear/root.war
- In the example below an html file called "spice.html"
- Copy/Paste the below into spice.html:
<!DOCTYPE html>
<html>
<head>
<title>SPICE Plugin Installer</title>
<script type="text/javascript">
function installSpice()
{
try {
document.getElementById('SpiceX').innerHTML = '<OBJECT id="SpiceX"
codebase="/spice/SpiceX.cab"
classid="clsid:ACD6D89C-938D-49B4-8E81-DDBD13F4B48A" width="0"
height="0"></OBJECT>';
} catch (ex) {
alert("Epic Fail!: " + ex.Description);
}
}
</script>
</head>
<body>
<p>
<p><b id='SpiceX'>Spice ActiveX Plugin</b> </p>
<button onclick='installSpice()'>Install Spice Plugin</button>
</p>
</body>
</html>
- Save the file
- You will now need to restart the ovirt-engine service EG: systemctl
restart ovirt-engine.service OR service ovirt-engine restart
- The installer page will now be available at http://<url to ovirt
server>/spice.html EX: http://ovirt.azeroth.net/spice.html
- Navigate to that page and click the install button
- IE will prompt you to deploy/install the SpiceX cabinet file
- It may gripe about an unsigned or untrusted source; acknowledge this and
proceed anyway
- If the install succeeds the text "Spice ActiveX Plugin" on the page will
change to blank (it's actually the plugin with null values)
- The console button in the user and webadmin portals will now launch the
new virt-viewer spice-based console!
Happy Spice Consoling to your VM's from Windows!
*NOTE*
To uninstall the plugin:
- The below removes the add-on from IE (EG: removes knowledge of
"application/x-spice")
- pop a command terminal and type:
On Windows XP: regsvr32 /u "C:\Documents and Settings\Administrator\Local
Settings\Application Data\virt-viewer\bin\SpiceX.dll"
On Windows 7: regsvr32 /u "C:\Users\Administrator\AppData\Local\virt-viewer\bin\SpiceX.dll"
- Next we need to remove the rest of virt-viewer
- Go to add/remove programs and uninstall virt-viewer, this removes the
rest of virt-viewer from the system
12 years, 2 months
[Users] OpenLDAP Simple Authentication in Ovirt Engine
by Thierry Kauffmann
>> Hi,
>>
>> I am currently testing Ovirt 3.1 standalone on Fedora 17.
>>
>> Until now, I could only use the default user admin@internal.
>>
>> Our Directory at the University is OpenLDAP. We use it for authentication
>> WITHOUT Kerberos : Simple authentication.
>>
>> I wonder how to use this backend to authenticate users and manage groups
>> in Ovirt.
>>
>> Has anyone already set this up ?
>> How to configure Ovirt to use Simple Authentication (No Kerberos).
>>
>> Cheers,
>>
>> --
>> Thierry Kauffmann
>> Chef du Service Informatique // Faculté des Sciences // Université de
>> Montpellier 2
>>
>> [image: SIF - Service Informatique de la Faculté des Sciences]<http://sif.info-ufr.univ-montp2.fr/> [image:
>> UM2 - Université de Montpellier 2] <http://www.univ-montp2.fr/> Service
>> informatique de la Faculté des Sciences (SIF)
>> Université de Montpellier 2
>> CC437 // Place Eugène Bataillon // 34095 Montpellier Cedex 5
>>
>> Tél : 04 67 14 31 58
>> email : thierry.kauffmann(a)univ-montp2.fr
>> web : http://sif.info-ufr.univ-montp2.fr/
>> http://www.fdsweb.univ-montp2.fr/
>>
>>
>>
> Hi,
>
> This is a response from an older thread from Yair Zaslavsky:
>
> " there is no code allowing to add simple-authentication domains to
> Manage-Domains.
> In the past we did have the ability to do that, but there are several
> problematic issues."
>
> Best regards,
Hi,
correct me if I am wrong, but this wiki page
(http://www.ovirt.org/DomainInfrastructure) clearly states:
> 1. Authenticating Active Directory, IPA and RHDS using either simple
> or gssapi authentication
> 2. Querying the directory using the LDAP protocol
> 3. Auto deducing the LDAP provider type
> 4. Easily adding new LDAP provider types
> 5. Easily adding new query types
>
So what?
--
signature-TK Thierry Kauffmann
Chef du Service Informatique // Faculté des Sciences // Université de
Montpellier 2
SIF - Service Informatique de la Faculté des Sciences
<http://sif.info-ufr.univ-montp2.fr/> UM2 - Université de Montpellier 2
<http://www.univ-montp2.fr/> Service informatique de la Faculté des
Sciences (SIF)
Université de Montpellier 2
CC437 // Place Eugène Bataillon // 34095 Montpellier Cedex 5
Tél : 04 67 14 31 58
email : thierry.kauffmann(a)univ-montp2.fr
<mailto:thierry.kauffmann@univ-montp2.fr>
web : http://sif.info-ufr.univ-montp2.fr/
http://www.fdsweb.univ-montp2.fr/
12 years, 2 months
[Users] UI Plugin iframe dialogs
by René Koch (ovido)
Hi,
I'm still working on my Nagios integration plugin and came across a
limitation of the UI plugin framework caused by iframes.
UI framework creates an iframe for each plugin, so the plugin code is
separated from the main oVirt webadmin code (and other plugins). When
creating a new (big) jQuery dialog in a sub-tab iframe (sub tab of the
selected vm or host), it can't be displayed without scrolling in the sub
tab or resizing the sub tab (that's understandable, as it's displayed in a too
small iframe).
So it would be great if it were possible to display dialogs in the
middle of the main window and overlap the iframe (I don't know if this is
possible). In short, I want to create a dialog which behaves like
e.g. the "Setup Host Networks" or "Add Permission to User" dialogs ->
click on a link in the plugin iframe and the dialog opens in the middle of
the website, not the middle of the iframe.
What I found out so far is that:
1. I must be aware of the same origin policy (that's no problem)
2. I need to put my jQuery dialog code in the main oVirt window and
then I can call it from within the iframe (that's my problem)
So my questions are:
Is it possible to place code outside of the iframe?
If not - are there plans to allow this in future releases?
Or maybe is there a workaround?
Thanks a lot,
René
12 years, 2 months
[Users] OS-independent ovirt-engine distribution archive
by Jiri Belka
Hello,
I'm very slowly working on making ovirt-engine run on a BSD system.
My problem is that I do it for fun, and as my time resources are not big
I could not choose to build ovirt-engine from sources, as that would push
me to "port" (make packages for) all the Java dependencies. Yes, during the
build Maven cannot download dependencies from the Internet (for
security reasons).
Would it be possible to have (another) distribution archive of
ovirt-engine which would be OS/distro-independent, so I could just
extract it and copy it to the filesystem for a local JBoss? (RPM packages can
be extracted with 'rpm2cpio', but their owners decided to make life very
complicated [many symlinks etc.].)
An OS/distro-independent distribution archive (.zip, .tgz) would make life
much easier for people wanting to run ovirt-engine on a
non-RPM-based Linux distro or on a BSD/Solaris system.
jbelka
12 years, 2 months
[Users] Flush old logs
by Nicolas Ecarnot
Hi,
I'd first like to thank the people of this mailing list for their help and
advice - I learned a lot by reading your archive.
Here's a simple question: in the "Alerts" message queue at the bottom
of the screen, I see an error message dating from two weeks ago (about the
failure to verify the restart status of a host).
I'm pretty sure this shouldn't be there anymore, as the power management
of this host has already been tested and approved many times since.
I haven't found any trivial way to flush this message queue, and I was
wondering whether it is stored in a database.
Is there a way to clear these error messages?
(Am I right with the db location?)
Regards,
--
Nicolas Ecarnot
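In case it points you in the right direction: as far as I know those alerts
live in the audit_log table of the 'engine' PostgreSQL database, with alerts
carrying their own severity value (10, if memory serves). Something along
these lines lets you inspect and clear them - table and column names are from
memory, so check them with \d first and back up the database before deleting
anything:
/usr/bin/psql -U postgres -d engine -c '\d audit_log'
/usr/bin/psql -U postgres -d engine -c "SELECT log_time, severity, message FROM audit_log WHERE severity = 10;"   # severity=10 for alerts is an assumption
/usr/bin/psql -U postgres -d engine -c "DELETE FROM audit_log WHERE severity = 10 AND log_time < now() - interval '7 days';"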
12 years, 3 months
[Users] VM migrations failing
by Dead Horse
Engine Build --> Commit: 82bdc46dfdb46b000f67f0cd4e51fc39665bf13b
VDSM Build: --> Commit: da89a27492cc7d5a84e4bb87652569ca8e0fb20e + patch
--> http://gerrit.ovirt.org/#/c/11492/
Engine Side:
2013-01-30 10:56:38,439 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-70) Rerun vm 887d764a-f835-4112-9eda-836a772ea5eb.
Called from vds lostisles
2013-01-30 10:56:38,506 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-49) START, MigrateStatusVDSCommand(HostName = lostisles,
HostId = e042b03b-dd4e-414c-be1a-b2c65ac000f5,
vmId=887d764a-f835-4112-9eda-836a772ea5eb), log id: 6556e75b
2013-01-30 10:56:38,510 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-49) Failed in MigrateStatusVDS method
2013-01-30 10:56:38,510 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-49) Error code migrateErr and error message
VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
Fatal error during migration
2013-01-30 10:56:38,511 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(pool-3-thread-49) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=12,
mMessage=Fatal error during migration]]
VDSM Side:
Thread-43670::ERROR::2013-01-30 10:56:37,052::vm::200::vm.Vm::(_recover)
vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::this function is not supported
by the connection driver: virDomainMigrateToURI2
Thread-43670::ERROR::2013-01-30 10:56:37,513::vm::288::vm.Vm::(run)
vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::Failed to migrate
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 273, in run
self._startUnderlyingMigration()
File "/usr/share/vdsm/libvirtvm.py", line 504, in
_startUnderlyingMigration
None, maxBandwidth)
File "/usr/share/vdsm/libvirtvm.py", line 540, in f
ret = attr(*args, **kwargs)
File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line
111, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in
migrateToURI2
if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed',
dom=self)
libvirtError: this function is not supported by the connection driver:
virDomainMigrateToURI2
GuestMonitor-sl63::DEBUG::2013-01-30
10:56:38,235::libvirtvm::307::vm.Vm::(_getDiskLatency)
vmId=`887d764a-f835-4112-9eda-836a772ea5eb`::Disk vda latency not available
- DHC
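For what it's worth, virDomainMigrateToURI2 only appeared in libvirt 0.9.2,
so 'this function is not supported by the connection driver' often just means
the host is running an older libvirt than vdsm expects (the python2.6 paths in
the traceback hint at an EL6-based host). A quick sanity check on both the
source and destination host - not a confirmed diagnosis:
rpm -q libvirt libvirt-client vdsm
virsh --version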
12 years, 3 months
[Users] Community feedback on the new UI-plugin Framework
by Oved Ourfalli
Hey all,
We had an oVirt workshop this week, which included a few sessions about the new oVirt UI Plugin framework, including a Hackathon and a BOF session.
I've gathered some feedback we got from the different participants about the framework, and what they would like to see in its future.
1. People liked the fact that it is a simple framework, allowing you to do nice extensions rapidly, without the need to know complex technologies (simple javascript knowledge is all you need to know).
2. People want the framework to provide tools for adding UI components (main/sub tabs, dialogs, etc.) that aren't URL based, but are based on components we currently have in oVirt, such as grids, key-value pairs (such as the general sub-tab), action buttons in these custom tabs, etc.
The main reason for that is to easily develop a plugin with an oVirt-like look-and-feel. Chris Morrissey from Netapp showed a very nice plugin he wrote that did have an oVirt-like look-and-feel, but it wasn't easy... and it required him to develop something specific for the plugin to interact with in the 3rd-party application (something similar to the work we did in the oVirt-Foreman UI plugin).
3. Support adding tasks to the system - plugins may trigger asynchronous tasks behind the scene, both oVirt and external ones. oVirt tasks and their progress will be reflected in the tasks management view, but if the flows contain external tasks as well, then it would be hard to track through the oVirt UI.
4. Plugin management
* The ability to see what plugins are installed... install new plugins and remove existing ones.
* Change the plugin configuration through webadmin
* Distinguish between public plugin configuration entries (entries the user is allowed to change) and private ones (entries they can't).
I guess that this point will be relevant for engine-plugins as well (once support for such plugins is available), so we should consider providing a similar solution for both. Also, Chris pointed out that it should be taken into consideration when working on supporting an HA oVirt engine, as plugins are a vital part of the oVirt environment.
If you find the feedback above accurate, or you have other comments that weren't mentioned here, please share them with us!
Thank you,
Oved
P.S:
I guess the slides will be uploaded sometime next week (I guess someone would have asked it soon... so now you have your answer :-) )
12 years, 3 months
[Users] UI Plugin issue when switching main tabs
by René Koch
Hi,
I'm working on a UI plugin to integrate Nagios/Icinga into oVirt Engine and have made some progress, but I have an issue when switching main tabs.
I use VirtualMachineSelectionChange to create a URL with the name of the vm (and HostSelectionChange for hosts).
The name is used in my backend code (Perl) for fetching the monitoring status.
Here's the code of VirtualMachineSelectionChange:
VirtualMachineSelectionChange: function() {
var vmName = arguments[0].name;
alert(vmName);
// Reload VM Sub Tab
api.setTabContentUrl('vms-monitoring', conf.url + '?subtab=vms&name=' + encodeURIComponent(vmName));
}
Everything works fine as long as I stay in the Virtual Machines main tab.
When switching to e.g. Disks and back to Virtual Machines again, the JavaScript code of start.html isn't processed anymore (or is cached (?), as my generated URL with the last vm name will still be sent back to my Perl backend) - I added alert() to test this.
oVirt Engine version: ovirt-engine-3.2.0-1.20130118.gitd102d6f.fc18.noarch
Full code of start.hml: http://pastebin.com/iEY6dA6F
Thanks a lot for your help,
René
12 years, 3 months
Re: [Users] adding multiple interfaces with different networks
by Kevin Maziere Aubry
Hi
I have exactly the same issue.
This means that a 1Gb (at least) interface must be dedicated to the
ovirtmgmt interface, which is not a good idea.
Kevin
2012/12/26 Jonathan Horne <jhorne(a)skopos.us>
> 2012-12-26 16:48:56,416 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--0.0.0.0-8009-8) [2d2d6184] Failed in SetupNetworksVDS method
> 2012-12-26 16:48:56,417 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--0.0.0.0-8009-8) [2d2d6184] Error code ERR_BAD_BONDING and error
> message VDSGenericException: VDSErrorException: Failed to SetupNetworksVDS,
> error = bonding 'bond2' is already member of network 'ovirtmgmt'
> 2012-12-26 16:48:56,418 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--0.0.0.0-8009-8) [2d2d6184]
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to SetupNetworksVDS, error =
> bonding 'bond2' is already member of network 'ovirtmgmt'
> 2012-12-26 16:48:56,418 ERROR
> [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-8)
> [2d2d6184] Command SetupNetworksVDS execution failed. Exception:
> RuntimeException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to SetupNetworksVDS, error =
> bonding 'bond2' is already member of network 'ovirtmgmt'
>
>
> so I'm guessing… I can't have my vlan3204 or vlan3202 share an interface
> with ovirtmgmt?
>
> [root@d0lppc021 ~]# rpm -qa|grep ovirt
> ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
> ovirt-engine-cli-3.1.0.7-1.el6.noarch
> ovirt-image-uploader-3.1.0-16.el6.noarch
> ovirt-engine-backend-3.1.0-3.19.el6.noarch
> ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
> ovirt-iso-uploader-3.1.0-16.el6.noarch
> ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
> ovirt-engine-config-3.1.0-3.19.el6.noarch
> ovirt-log-collector-3.1.0-16.el6.noarch
> ovirt-engine-restapi-3.1.0-3.19.el6.noarch
> ovirt-engine-userportal-3.1.0-3.19.el6.noarch
> ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
> ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
> ovirt-engine-3.1.0-3.19.el6.noarch
> ovirt-engine-jbossas711-1-0.x86_64
> ovirt-engine-setup-3.1.0-3.19.el6.noarch
> ovirt-engine-sdk-3.1.0.5-1.el6.noarch
>
>
>
>
>
>
--
Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
http://www.alterway.fr
12 years, 3 months
[Users] ovirt 3.2 migrations failing
by Jonathan Horne
I just built up 2 nodes and a manager on 3.2 dreyou packages, and now that
I have a VM up and installed with the rhev agent, the VM is unable to migrate.
The failure is pretty much immediate.
I don't know where to begin troubleshooting this; can someone help me get
going in the right direction? Just let me know what logs are appropriate
and I will post them up.
thanks,
jonathan
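The usual places to start are the engine log on the manager and the vdsm log
on both the source and destination host, around the timestamp of the failed
migration (paths as on a default install; the libvirtd log only exists if file
logging is enabled):
tail -f /var/log/ovirt-engine/engine.log
tail -f /var/log/vdsm/vdsm.log
grep -i error /var/log/libvirt/libvirtd.log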
12 years, 3 months
[Users] Problem with libvirt
by Juan Jose
Hello everybody,
I have installed and configured the oVirt 3.1 engine on Fedora 17 with a
Fedora 17 node connected. I have defined an NFS domain for my VMs and another
for ISOs. I try to start a Fedora 17 Server with Run Once and the machine
starts without problems; after that I proceed with the installation on its
virtual disk, but when I get to defining partitions on the virtual disk the
machine freezes, I start to receive engine errors and the default data
center goes into a Non Responsive status.
I can see these messages in /var/log/ovirt-engine/engine.log, which I attach
to this message:
....
2013-01-31 11:43:23,957 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] Recieved a Device without an address
when processing VM da09284e-3189-428b-a879-6201f7a5ca87 devices, skipping
device: {shared=false, volumeID=1d0e9fdf-c4bc-4894-8ff1-7a5e185d57a4,
index=0, propagateErrors=off, format=raw, type=disk, truesize=8589938688,
reqsize=0, bootOrder=2, iface=virtio,
volumeChain=[Ljava.lang.Object;@1ea2bdf9,
imageID=49e21bfc-384b-4bea-8013-f02b1be137c7,
domainID=57d184a0-908b-49b5-926f-cd413b9e6526, specParams={},
optional=false, needExtend=false,
path=/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/49e21bfc-384b-4bea-8013-f02b1be137c7/1d0e9fdf-c4bc-4894-8ff1-7a5e185d57a4,
device=disk, poolID=d6e7e8b8-49c7-11e2-a261-000a5e429f63, readonly=false,
deviceId=49e21bfc-384b-4bea-8013-f02b1be137c7, apparentsize=8589934592}.
2013-01-31 11:43:23,960 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=4dca1c64-dbf8-4e31-b359-82cf0e259f65,Device=qxl,Type=video,BootOrder=0,SpecParams={vram=65536},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:23,961 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=aba73f2f-e951-4eba-9da4-8fb58315df2c,Device=memballoon,Type=balloon,BootOrder=0,SpecParams={model=virtio},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:23,962 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=9bfb770c-13fa-4bf6-9f1f-414927bc31b0,Device=cdrom,Type=disk,BootOrder=0,SpecParams={path=},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:23,963 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=614bc0b4-64d8-4058-8bf8-83db62617e00,Device=bridge,Type=interface,BootOrder=0,SpecParams={},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:23,964 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=49e21bfc-384b-4bea-8013-f02b1be137c7,Device=disk,Type=disk,BootOrder=0,SpecParams={},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:26,063 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM Fedora17
da09284e-3189-428b-a879-6201f7a5ca87 moved from WaitForLaunch --> PoweringUp
2013-01-31 11:43:26,064 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(QuartzScheduler_Worker-24) [7d021319] START, FullListVdsCommand(vdsId =
7d3491e8-49ce-11e2-8b2e-000a5e429f63, vds=null,
vmIds=[da09284e-3189-428b-a879-6201f7a5ca87]), log id: f68f564
2013-01-31 11:43:26,086 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(QuartzScheduler_Worker-24) [7d021319] FINISH, FullListVdsCommand, return:
[Lorg.ovirt.engine.core.vdsbroker.xmlrpc.XmlRpcStruct;@33c68023, log id:
f68f564
2013-01-31 11:43:26,091 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=aba73f2f-e951-4eba-9da4-8fb58315df2c,Device=memballoon,Type=balloon,BootOrder=0,SpecParams={model=virtio},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:26,092 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=9bfb770c-13fa-4bf6-9f1f-414927bc31b0,Device=cdrom,Type=disk,BootOrder=0,SpecParams={path=},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:31,721 INFO
[org.ovirt.engine.core.bll.SetVmTicketCommand] (ajp--0.0.0.0-8009-11)
[28d7a789] Running command: SetVmTicketCommand internal: false. Entities
affected : ID: da09284e-3189-428b-a879-6201f7a5ca87 Type: VM
2013-01-31 11:43:31,724 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--0.0.0.0-8009-11) [28d7a789] START, SetVmTicketVDSCommand(vdsId =
7d3491e8-49ce-11e2-8b2e-000a5e429f63,
vmId=da09284e-3189-428b-a879-6201f7a5ca87, ticket=qmcnuOICblb3,
validTime=120,m userName=admin@internal,
userId=fdfc627c-d875-11e0-90f0-83df133b58cc), log id: 6eaacb95
2013-01-31 11:43:31,758 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--0.0.0.0-8009-11) [28d7a789] FINISH, SetVmTicketVDSCommand, log id:
6eaacb95
...
2013-01-31 11:49:13,392 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(QuartzScheduler_Worker-81) [164eaa47] domain
57d184a0-908b-49b5-926f-cd413b9e6526 in problem. vds: host1
2013-01-31 11:49:54,121 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-73) [73213e4f] vds::refreshVdsStats Failed
getVdsStats, vds = 7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1, error =
VDSNetworkException: VDSNetworkException:
2013-01-31 11:49:54,172 WARN [org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-73) [73213e4f]
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1, VDS Network Error, continuing.
VDSNetworkException:
....
In the events window, after the VM freezes, I have the events below:
2013-Jan-31, 11:50:52 Failed to elect Host as Storage Pool Manager for
Data Center Default. Setting status to Non-Operational.
2013-Jan-31, 11:50:52 VM Fedora17 was set to the Unknown status.
2013-Jan-31, 11:50:52 Host host1 is non-responsive.
2013-Jan-31, 11:49:55 Invalid status on Data Center Default. Setting Data
Center status to Non-Responsive (On host host1, Error: Network error during
communication with the Host.).
2013-Jan-31, 11:44:25 VM Fedora17 started on Host host1
Any suggestions about the problem? It seems to be a libvirt problem; I will
continue investigating.
Many thanks in advance,
Juanjo.
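The combination of 'domain ... in problem' and the VDSNetworkException right
after the guest starts writing to its disk often points at the NFS data domain
becoming unresponsive, which would also freeze the installer mid-partitioning.
Only a guess; some quick checks on the host, with <nfs-server> standing in for
your NFS server:
df -h | grep rhev
showmount -e <nfs-server>   # <nfs-server> is a placeholder
tail -f /var/log/vdsm/vdsm.log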
12 years, 3 months
[Users] Glusterfs HA doubts
by Adrian Gibanel
In oVirt 3.1 GlusterFS support was added. It was an easy way to replicate your virtual machine storage without too much hassle.
There are two main howtos:
* http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-usi... (Robert Middleswarth)
* http://blog.jebpages.com/archives/ovirt-3-1-glusterized/ (Jason Brooks).
1) What about performance?
I've done some tests with rsync backups (even using the suggested --inplace rsync switch) that involve small files. These backups were done onto locally mounted GlusterFS volumes. Instead of lasting about 2 hours, the backups lasted around 15 hours.
Is there maybe something that only happens with small files, while performance with big files is OK?
2) How to know the current status?
In DRBD you know it by checking a proc file, if I remember well. I also remember that GlusterFS doesn't have an equivalent and there's no obvious way to know if all the files are synced.
If you have tried it, how do you know whether both sets of virtual disk images are synced?
3) Mount dns resolution
If you check Jason Brooks' howto you will see that it uses a hostname to refer to the NFS mount. If you want to perform HA you need your storage to be mounted, and if the server1 host is down it doesn't help that the NFS mount point associated with the storage is server1:/vms/ and not server2:/vms/. Checking Middleswarth's howto, I think he does the same thing.
Let me explain a bit more so that you understand. My example setup is the one where you have two host machines, where you run a set of virtual machines on one and the other one doesn't have any virtual machine running. Where is the virtual machine storage located? It's located on the GlusterFS volume.
Say the first of the machines mounts the GlusterFS volume as NFS (as an example).
If it uses its own hostname for the NFS mount, then if it goes down the second host isn't going to mount it when the VMs are restarted in HA mode.
If it uses the second host's hostname for the NFS mount, then if the second host goes down the virtual machines cannot access their virtual disks.
A workaround for this situation which I have thought of is to use /etc/hosts on both machines so that:
whatever.domain.com
resolves on both hosts to the host's own IP.
I think that GlusterFS has a way of mounting its shares through "-t glusterfs" that can somehow avoid these hostname problems, but I haven't read much about it so I'm not too sure.
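For the native client the server name is only used to fetch the volume
description; after that the client talks to all bricks directly, and the mount
can be given a fallback server for that first step. Something like the
following - server1/server2, the volume name 'vms' and the mount point are
placeholders, and the option name is worth double-checking against your
glusterfs version:
mount -t glusterfs -o backupvolfile-server=server2 server1:/vms /mnt/vms   # names and paths are placeholders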
4) So my doubts basically are:
* Has anyone set up a two-host GlusterFS HA oVirt cluster where storage is a replicated GlusterFS volume that is shared and stored by both of them?
* Does HA work when one of the hosts goes down?
* Or does it complain about hostname as I suspect?
* Any tips to ensure the best performance?
Thank you.
--
--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
12 years, 3 months
[Users] 3.2 beta and f18 host on dell R815 problem
by Gianluca Cecchi
during install of the server I get this:
Host installation failed. Fix installation issues and try to Re-Install
In the deploy log:
2013-01-31 12:17:30 DEBUG
otopi.plugins.ovirt_host_deploy.vdsm.hardware
hardware._isVirtualizationEnabled:144 virtualization support
GenuineIntel (cpu: False, bios: True)
2013-01-31 12:17:30 DEBUG otopi.context context._executeMethod:127
method exception
Traceback (most recent call last):
File "/tmp/ovirt-SfEARpd3h4/pythonlib/otopi/context.py", line 117,
in _executeMethod
method['method']()
File "/tmp/ovirt-SfEARpd3h4/otopi-plugins/ovirt-host-deploy/vdsm/hardware.py",
line 170, in _validate_virtualization
_('Hardware does not support virtualization')
RuntimeError: Hardware does not support virtualization
2013-01-31 12:17:30 ERROR otopi.context context._executeMethod:136
Failed to execute stage 'Setup validation': Hardware does not support
virtualization
note the GenuineIntel above... ??
But actually it is AMD
[root@f18ovn03 ~]# lsmod|grep kvm
kvm_amd 59623 0
kvm 431794 1 kvm_amd
cat /proc/cpuinfo
...
processor : 47
vendor_id : AuthenticAMD
cpu family : 16
model : 9
model name : AMD Opteron(tm) Processor 6174
stepping : 1
microcode : 0x10000d9
cpu MHz : 800.000
cache size : 512 KB
physical id : 3
siblings : 12
core id : 5
cpu cores : 12
apicid : 59
initial apicid : 59
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl
nonstop_tsc extd_apicid amd_dcm pni monitor cx16 popcnt lahf_lm
cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
osvw ibs skinit wdt nodeid_msr hw_pstate npt lbrv svm_lock nrip_save
pausefilter
bogomips : 4400.44
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
Any hint?
Gianluca
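Since /proc/cpuinfo clearly shows the svm flag and kvm_amd is loaded, the
'GenuineIntel' line looks like the detection in ovirt-host-deploy taking the
wrong vendor branch rather than a real lack of hardware support. Before
digging into the deploy code it may be worth confirming what the host itself
reports (just a sanity check):
grep -m1 vendor_id /proc/cpuinfo
grep -c svm /proc/cpuinfo
lsmod | grep kvm
ls -l /dev/kvm
dmesg | grep -iE 'kvm|svm'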
12 years, 3 months
[Users] 3.2 beta: Amd Opteron 6174 wrongly detected as 8 socket
by Gianluca Cecchi
Hello,
after deploying a node that has 4 sockets with 12 cores each, it is
wrongly detected in the web admin GUI.
See:
https://docs.google.com/file/d/0BwoPbcrMv8mvdjdYNjVfT2NWY0U/edit
It says 8 sockets each with 6 cores....
Output of
# virsh capabilities
here:
https://docs.google.com/file/d/0BwoPbcrMv8mveG5OaVBZN1VENlU/edit
output of cpuid here:
https://docs.google.com/file/d/0BwoPbcrMv8mvUFFRYkZEX0lmRG8/edit
I also ran this:
[root@f18ovn03 ~]# vdsClient -s 0 getVdsCaps
HBAInventory = {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:f9baf5a8f6c3'}], 'FC': []}
ISCSIInitiatorName = iqn.1994-05.com.redhat:f9baf5a8f6c3
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask':
'', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr':
'', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '192.168.1.102', 'cfg': {'DOMAIN':
'localdomain.local', 'UUID': '60d40d4a-d8ab-4f5b-bd48-2e807df36be4',
'DNS3': '82.113.193.3', 'IPADDR0': '192.168.1.102', 'DNS1':
'192.168.1.103', 'PREFIX0': '24', 'DEFROUTE': 'yes',
'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no',
'BOOTPROTO': 'none', 'GATEWAY0': '192.168.1.1', 'DNS2': '8.8.8.8',
'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes', 'IPV6INIT':
'no'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off',
'ports': ['em1']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 48
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,amd_dcm,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,nodeid_msr,hw_pstate,npt,lbrv,svm_lock,nrip_save,pausefilter,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2
cpuModel = AMD Opteron(tm) Processor 6174
cpuSockets = 8
cpuSpeed = 800.000
cpuThreads = 48
emulatedMachines = ['pc-1.2', 'none', 'pc', 'pc-1.1', 'pc-1.0',
'pc-0.15', 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10',
'isapc', 'pc-1.2', 'none', 'pc', 'pc-1.1', 'pc-1.0', 'pc-0.15',
'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
guestOverhead = 65
hooks = {}
kvmEnabled = true
lastClient = 192.168.1.111
lastClientIface = ovirtmgmt
management_ip =
memSize = 64418
netConfigDirty = False
networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.1.102', 'cfg': {'DOMAIN': 'localdomain.local', 'UUID':
'60d40d4a-d8ab-4f5b-bd48-2e807df36be4', 'DNS3': '82.113.193.3',
'IPADDR0': '192.168.1.102', 'DNS1': '192.168.1.103', 'PREFIX0': '24',
'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'GATEWAY0': '192.168.1.1',
'DNS2': '8.8.8.8', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT':
'yes', 'IPV6INIT': 'no'}, 'mtu': '1500', 'netmask': '255.255.255.0',
'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports':
['em1']}}
nics = {'em4': {'addr': '', 'cfg': {'PEERROUTES': 'yes', 'UUID':
'bed68125-4345-4995-ba49-a6e5580c58dd', 'NAME': 'em4', 'TYPE':
'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE': 'yes', 'PEERDNS':
'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR': '00:25:64:F9:76:82',
'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes', 'IPV6_FAILURE_FATAL':
'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE': 'yes', 'ONBOOT':
'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:25:64:f9:76:82', 'speed': 0}, 'em1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'DOMAIN': 'localdomain.local', 'DEVICE':
'em1', 'UUID': '60d40d4a-d8ab-4f5b-bd48-2e807df36be4', 'DNS3':
'82.113.193.3', 'IPADDR0': '192.168.1.102', 'DNS1': '192.168.1.103',
'PREFIX0': '24', 'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no',
'NM_CONTROLLED': 'no', 'GATEWAY0': '192.168.1.1', 'DNS2': '8.8.8.8',
'HWADDR': '00:25:64:f9:76:7c', 'ONBOOT': 'yes', 'IPV6INIT': 'no'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:25:64:f9:76:7c', 'speed':
1000}, 'em3': {'addr': '', 'cfg': {'PEERROUTES': 'yes', 'UUID':
'2984885c-fbd8-4ad1-a393-00f0a205ae79', 'NAME': 'em3', 'TYPE':
'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE': 'yes', 'PEERDNS':
'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR': '00:25:64:F9:76:80',
'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes', 'IPV6_FAILURE_FATAL':
'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE': 'yes', 'ONBOOT':
'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:25:64:f9:76:80', 'speed': 0}, 'em2': {'addr': '', 'cfg':
{'PEERROUTES': 'yes', 'UUID': 'ebd889bc-57ae-4ee9-8db2-4595309ee81c',
'NAME': 'em2', 'TYPE': 'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE':
'yes', 'PEERDNS': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR':
'00:25:64:F9:76:7E', 'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes',
'IPV6_FAILURE_FATAL': 'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE':
'yes', 'ONBOOT': 'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask':
'', 'hwaddr': '00:25:64:f9:76:7e', 'speed': 0}}
operatingSystem = {'release': '1', 'version': '18', 'name': 'Fedora'}
packages2 = {'kernel': {'release': '204.fc18.x86_64', 'buildtime':
1358955869.0, 'version': '3.7.4'}, 'spice-server': {'release':
'1.fc18', 'buildtime': 1356035501, 'version': '0.12.2'}, 'vdsm':
{'release': '6.fc18', 'buildtime': 1359564723, 'version': '4.10.3'},
'qemu-kvm': {'release': '2.fc18', 'buildtime': 1358351894, 'version':
'1.2.2'}, 'libvirt': {'release': '3.fc18', 'buildtime': 1355788803,
'version': '0.10.2.2'}, 'qemu-img': {'release': '2.fc18', 'buildtime':
1358351894, 'version': '1.2.2'}, 'mom': {'release': '1.fc18',
'buildtime': 1349470214, 'version': '0.3.0'}}
reservedMem = 321
software_revision = 6
software_version = 4.10
supportedENGINEs = ['3.0', '3.1']
supportedProtocols = ['2.2', '2.3']
uuid = 4C4C4544-0056-5910-8047-CAC04F4E344A
version_name = Snow Man
vlans = {}
vmTypes = ['kvm']
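For what it's worth, the totals agree (8 x 6 = 48 = cpuCores = cpuThreads); the 6100-series Opterons are multi-chip modules (two 6-core dies per physical package), so a tool that counts dies instead of packages would report 8 "sockets" of 6 cores rather than 4 of 12. A quick way to see what libvirt itself reports for the topology (a minimal sketch, assuming virsh is installed on the host) is:

# topology_check.py -- hypothetical comparison against libvirt's view
import subprocess
import xml.etree.ElementTree as ET

caps = subprocess.check_output(['virsh', 'capabilities'])
topo = ET.fromstring(caps).find('./host/cpu/topology')
sockets = int(topo.get('sockets'))
cores = int(topo.get('cores'))
threads = int(topo.get('threads'))
print('libvirt topology: %d sockets x %d cores x %d threads = %d cpus'
      % (sockets, cores, threads, sockets * cores * threads))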
(this time the host is the intended one... ;-)
Gianluca
[Users] Fwd: Re: 3.2 beta and f18 host on dell R815 problem
by Gianluca Cecchi
Find attached:
Putty.log = output of the cpuid command
Engine.log, after patching and retrying.
There is no file under host-deploy; it gives a syntax error.
>> > ---------- Forwarded message ----------
>> > From: Alon Bar-Lev <alonbl(a)redhat.com>
>> > Date: Thu, Jan 31, 2013 at 1:48 PM
>> > Subject: Re: [Users] 3.2 beta and f18 host on dell R815 problem
>> > To: Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
>> >
>> >
>> > Sorry, had error.
>> >
>> > ----- Original Message -----
>> >> From: "Alon Bar-Lev" <alonbl(a)redhat.com>
>> >> To: "Gianluca Cecchi" <gianluca.cecchi(a)gmail.com>
>> >> Sent: Thursday, January 31, 2013 2:40:55 PM
>> >> Subject: Re: [Users] 3.2 beta and f18 host on dell R815 problem
>> >>
>> >> Hi!
>> >>
>> >> Can you please try to replace the attach file at:
>> >>
/usr/share/ovirt-host-deploy/plugins/ovirt-host-deploy/vdsm/hardware.py
>> >>
>> >> Retry and send me the log?
>> >>
>> >> I added some more debug to see what went wrong.
>> >>
>> >> Thanks!
>> >> Alon
>> >>
>> >>
>> >> ----- Original Message -----
>> >> > From: "Gianluca Cecchi" <gianluca.cecchi(a)gmail.com>
>> >> > To: "users" <users(a)ovirt.org>
>> >> > Sent: Thursday, January 31, 2013 1:57:38 PM
>> >> > Subject: Re: [Users] 3.2 beta and f18 host on dell R815 problem
>> >> >
>> >> > Output of command
>> >> > # virsh capabilities
>> >> > on this host
>> >> >
>> >> > https://docs.google.com/file/d/0BwoPbcrMv8mveG5OaVBZN1VENlU/edit
>> >> > _______________________________________________
>> >> > Users mailing list
>> >> > Users(a)ovirt.org
>> >> > http://lists.ovirt.org/mailman/listinfo/users
>> >> >
>> >>
>
>
[Users] latest vdsm cannot read ib device speeds causing storage attach fail
by Dead Horse
Any ideas on this one? (from VDSM log):
Thread-25::DEBUG::2013-01-22
15:35:29,065::BindingXMLRPC::914::vds::(wrapper) client [3.57.111.30]::call
getCapabilities with () {}
Thread-25::ERROR::2013-01-22 15:35:29,113::netinfo::159::root::(speed)
cannot read ib0 speed
Traceback (most recent call last):
File "/usr/lib64/python2.6/site-packages/vdsm/netinfo.py", line 155, in
speed
s = int(file('/sys/class/net/%s/speed' % dev).read())
IOError: [Errno 22] Invalid argument
This causes VDSM to fail to attach storage.
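The traceback suggests the sysfs speed attribute simply cannot be read for InfiniBand interfaces (the read of /sys/class/net/ib0/speed fails with EINVAL). A defensive lookup along these lines (a minimal sketch, not the actual vdsm fix) would treat an unreadable speed as 0 instead of raising:

# sketch of a defensive speed() lookup, not the vdsm patch itself
def nic_speed(dev):
    """Return link speed in Mbps, or 0 if the kernel can't report it
    (e.g. reading /sys/class/net/ib0/speed raises EINVAL on InfiniBand)."""
    try:
        with open('/sys/class/net/%s/speed' % dev) as f:
            return int(f.read())
    except (IOError, ValueError):
        return 0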
Engine side sees:
ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
(QuartzScheduler_Worker-96) [553ef26e] The connection with details
192.168.0.1:/ovirt/ds failed because of error code 100 and error message
is: general exception
2013-01-22 15:35:30,160 INFO
[org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
(QuartzScheduler_Worker-96) [1ab78378] Running command:
SetNonOperationalVdsCommand internal: true. Entities affected : ID:
8970b3fe-1faf-11e2-bc1f-00151712f280 Type: VDS
2013-01-22 15:35:30,200 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(QuartzScheduler_Worker-96) [1ab78378] START,
SetVdsStatusVDSCommand(HostName = kezan, HostId =
8970b3fe-1faf-11e2-bc1f-00151712f280, status=NonOperational,
nonOperationalReason=STORAGE_DOMAIN_UNREACHABLE), log id: 4af5c4cd
2013-01-22 15:35:30,211 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(QuartzScheduler_Worker-96) [1ab78378] FINISH, SetVdsStatusVDSCommand, log
id: 4af5c4cd
2013-01-22 15:35:30,242 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(QuartzScheduler_Worker-96) [1ab78378] Try to add duplicate audit log
values with the same name. Type: VDS_SET_NONOPERATIONAL_DOMAIN. Value:
storagepoolname
Engine = latest master
VDSM = latest master
node = el6
[Users] Choosing which network to run VM migration on ovirt 3.1
by Yuval M
Hi,
I'm running the following setup:
2 hosts,
each with 2 physical NICs:
the first NIC (which is bridged to ovirtmgmt) is a 100 Mbps Ethernet card
connected to a switch (and to the internet);
the 2nd NIC is a fast InfiniBand card which is connected back-to-back to
the other host.
Both links are running fine, and I managed to have the 2nd host mount the
storage via the fast link.
The problem is that VM migration takes place over the slow link.
How do I configure the cluster so that the migration uses the fast link?
I've already created a network using the web interface, but the migration
still uses the slow link.
Thanks,
Yuval
[Users] Cannot create vm's from user portal
by Dead Horse
Seeing this error in the engine log when attempting to create a new VM from
the user portal:
2013-01-28 15:35:47,158 ERROR
[org.ovirt.engine.core.bll.GetClustersWithPermittedActionQuery]
(ajp--127.0.0.1-8702-5) Query GetClustersWithPermittedActionQuery failed.
Exception message is PreparedStatementCallback; bad SQL grammar [select *
from fn_perms_get_vds_groups_with_permitted_action(?, ?)]; nested
exception is org.postgresql.util.PSQLException: ERROR: missing FROM-clause
entry for table "vds_groups"
Where: PL/pgSQL function "fn_perms_get_vds_groups_with_permitted_action"
line 3 at RETURN QUERY
2013-01-28 15:36:16,726 ERROR
[org.ovirt.engine.core.bll.GetClustersWithPermittedActionQuery]
(ajp--127.0.0.1-8702-7) Query GetClustersWithPermittedActionQuery failed.
Exception message is PreparedStatementCallback; bad SQL grammar [select *
from fn_perms_get_vds_groups_with_permitted_action(?, ?)]; nested
exception is org.postgresql.util.PSQLException: ERROR: missing FROM-clause
entry for table "vds_groups"
Where: PL/pgSQL function "fn_perms_get_vds_groups_with_permitted_action"
line 3 at RETURN QUERY
Current running engine build --> commit:
61c11aecc40e755d08b6c34c6fe1c0a07fa94de8
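In case it helps narrow it down, one way to dump the body of the installed function and confirm whether vds_groups really is missing from its FROM clause (a minimal sketch; it assumes psycopg2 is available and that the engine database is reachable locally as "engine" with the postgres user, so adjust the connection string as needed):

# inspect_fn.py -- hypothetical check of the installed stored procedure
import psycopg2

conn = psycopg2.connect("dbname=engine user=postgres")
cur = conn.cursor()
cur.execute("SELECT prosrc FROM pg_proc "
            "WHERE proname = 'fn_perms_get_vds_groups_with_permitted_action'")
row = cur.fetchone()
print(row[0] if row else 'function not found')
cur.close()
conn.close()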
- DHC
[Users] oVirt 3.1 VM is not responding
by José Ferradeira
Hi,
I'm testing oVirt 3.1, with one node 2.5.5-0.1fc17 and a NAS built with
openfiler 2.99, working on a Gigabit net.
I configured an NFS share on openfiler, and everything looks nice.
I'm installing Windows 2008 R2 as a VM; from time to time I get a warning
message: "! VM is not responding", but the Windows installation keeps
going, a bit slowly.
Any idea what is happening?
Thanks
Jose
The vdsm.log:
Thread-2405::DEBUG::2013-01-30 16:42:21,532::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.7b44684c-5f34-11e2-beba-00138fbe3093`ReqID=`e9aefcf3-b4ed-4d7c-9d95-723f824c30ee`::Granted request
Thread-2405::DEBUG::2013-01-30 16:42:21,532::task::817::TaskManager.Task::(resourceAcquired) Task=`c581aa06-4e50-41dd-a836-756827865cc6`::_resourcesAcquired: Storage.7b44684c-5f34-11e2-beba-00138fbe3093 (shared)
Thread-2405::DEBUG::2013-01-30 16:42:21,532::task::978::TaskManager.Task::(_decref) Task=`c581aa06-4e50-41dd-a836-756827865cc6`::ref 1 aborting False
Thread-2405::INFO::2013-01-30 16:42:21,664::logUtils::39::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': 'cc060953-baf5-4b86-a66c-2f135b8fbb30', 'name': 'acloudDC', 'version': '0', 'domains': '39484795-666f-44b3-9cf5-21bbb02531ad:Active,0b6a87e8-53d9-46f3-bc81-20aab92f08aa:Active,cc060953-baf5-4b86-a66c-2f135b8fbb30:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/7b44684c-5f34-11e2-beba-00138fbe3093/0b6a87e8-53d9-46f3-bc81-20aab92f08aa/images/11111111-1111-1111-1111-111111111111', 'type': 'NFS', 'master_ver': 10575, 'lver': 4}, 'dominfo': {'0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'status': 'Active', 'diskfree': '17986224128', 'alerts': [], 'disktotal': '23648534528'}, '39484795-666f-44b3-9cf5-21bbb02531ad': {'status': 'Active', 'diskfree': '467744063488', 'alerts': [], 'disktotal': '476676358144'}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'status': 'Active', 'diskfree': '313802031104', 'alerts': [], 'disktotal': '982927736832'}}}
Thread-2405::DEBUG::2013-01-30 16:42:21,664::task::1172::TaskManager.Task::(prepare) Task=`c581aa06-4e50-41dd-a836-756827865cc6`::finished: {'info': {'spm_id': 1, 'master_uuid': 'cc060953-baf5-4b86-a66c-2f135b8fbb30', 'name': 'acloudDC', 'version': '0', 'domains': '39484795-666f-44b3-9cf5-21bbb02531ad:Active,0b6a87e8-53d9-46f3-bc81-20aab92f08aa:Active,cc060953-baf5-4b86-a66c-2f135b8fbb30:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/7b44684c-5f34-11e2-beba-00138fbe3093/0b6a87e8-53d9-46f3-bc81-20aab92f08aa/images/11111111-1111-1111-1111-111111111111', 'type': 'NFS', 'master_ver': 10575, 'lver': 4}, 'dominfo': {'0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'status': 'Active', 'diskfree': '17986224128', 'alerts': [], 'disktotal': '23648534528'}, '39484795-666f-44b3-9cf5-21bbb02531ad': {'status': 'Active', 'diskfree': '467744063488', 'alerts': [], 'disktotal': '476676358144'}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'status': 'Active', 'diskfree': '313802031104', 'alerts': [], 'disktotal': '982927736832'}}}
Thread-2405::DEBUG::2013-01-30 16:42:21,664::task::588::TaskManager.Task::(_updateState) Task=`c581aa06-4e50-41dd-a836-756827865cc6`::moving from state preparing -> state finished
Thread-2405::DEBUG::2013-01-30 16:42:21,664::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.7b44684c-5f34-11e2-beba-00138fbe3093': < ResourceRef 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093', isValid: 'True' obj: 'None'>}
Thread-2405::DEBUG::2013-01-30 16:42:21,664::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2405::DEBUG::2013-01-30 16:42:21,665::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093'
Thread-2405::DEBUG::2013-01-30 16:42:21,665::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093' (0 active users)
Thread-2405::DEBUG::2013-01-30 16:42:21,665::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093' is free, finding out if anyone is waiting for it.
Thread-2405::DEBUG::2013-01-30 16:42:21,665::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093', Clearing records.
Thread-2405::DEBUG::2013-01-30 16:42:21,665::task::978::TaskManager.Task::(_decref) Task=`c581aa06-4e50-41dd-a836-756827865cc6`::ref 0 aborting False
Thread-2406::DEBUG::2013-01-30 16:42:23,304::task::588::TaskManager.Task::(_updateState) Task=`8926a671-3076-42e5-8521-f8ec6ba67748`::moving from state init -> state preparing
Thread-2406::INFO::2013-01-30 16:42:23,304::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-2406::INFO::2013-01-30 16:42:23,304::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'39484795-666f-44b3-9cf5-21bbb02531ad': {'delay': '0.00179004669189', 'lastCheck': 1359564135.40165, 'code': 0, 'valid': True}, '0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'delay': '0.00332403182983', 'lastCheck': 1359564141.220177, 'code': 0, 'valid': True}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'delay': '0.00157809257507', 'lastCheck': 1359564143.248153, 'code': 0, 'valid': True}}
Thread-2406::DEBUG::2013-01-30 16:42:23,305::task::1172::TaskManager.Task::(prepare) Task=`8926a671-3076-42e5-8521-f8ec6ba67748`::finished: {'39484795-666f-44b3-9cf5-21bbb02531ad': {'delay': '0.00179004669189', 'lastCheck': 1359564135.40165, 'code': 0, 'valid': True}, '0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'delay': '0.00332403182983', 'lastCheck': 1359564141.220177, 'code': 0, 'valid': True}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'delay': '0.00157809257507', 'lastCheck': 1359564143.248153, 'code': 0, 'valid': True}}
Thread-2406::DEBUG::2013-01-30 16:42:23,305::task::588::TaskManager.Task::(_updateState) Task=`8926a671-3076-42e5-8521-f8ec6ba67748`::moving from state preparing -> state finished
Thread-2406::DEBUG::2013-01-30 16:42:23,305::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-2406::DEBUG::2013-01-30 16:42:23,305::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2406::DEBUG::2013-01-30 16:42:23,305::task::978::TaskManager.Task::(_decref) Task=`8926a671-3076-42e5-8521-f8ec6ba67748`::ref 0 aborting False
Thread-2407::DEBUG::2013-01-30 16:42:23,368::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`35fb5c70-3ac8-4b5e-8e48-8cffe9fdcf53`::Disk hdc stats not available
Thread-2407::DEBUG::2013-01-30 16:42:23,368::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`35fb5c70-3ac8-4b5e-8e48-8cffe9fdcf53`::Disk fda stats not available
VM Channels Listener::DEBUG::2013-01-30 16:42:27,492::vmChannels::60::vds::(_handle_timeouts) Timeout on fileno 18.
Thread-788::DEBUG::2013-01-30 16:42:28,907::task::588::TaskManager.Task::(_updateState) Task=`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::moving from state init -> state preparing
Thread-788::INFO::2013-01-30 16:42:28,907::logUtils::37::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='39484795-666f-44b3-9cf5-21bbb02531ad', spUUID='7b44684c-5f34-11e2-beba-00138fbe3093', imgUUID='2ff4c743-6e5a-42af-8b99-131c8849bd3f', volUUID='7304b24f-5c4d-41b3-b12f-b916bc9f4299', options=None)
Thread-788::DEBUG::2013-01-30 16:42:28,908::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.39484795-666f-44b3-9cf5-21bbb02531ad`ReqID=`9ba33a25-cdd2-4fc0-9f50-55a82178ad48`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-788::DEBUG::2013-01-30 16:42:28,908::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad' for lock type 'shared'
Thread-788::DEBUG::2013-01-30 16:42:28,908::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad' is free. Now locking as 'shared' (1 active user)
Thread-788::DEBUG::2013-01-30 16:42:28,908::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.39484795-666f-44b3-9cf5-21bbb02531ad`ReqID=`9ba33a25-cdd2-4fc0-9f50-55a82178ad48`::Granted request
Thread-788::DEBUG::2013-01-30 16:42:28,908::task::817::TaskManager.Task::(resourceAcquired) Task=`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::_resourcesAcquired: Storage.39484795-666f-44b3-9cf5-21bbb02531ad (shared)
Thread-788::DEBUG::2013-01-30 16:42:28,909::task::978::TaskManager.Task::(_decref) Task=`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::ref 1 aborting False
Thread-788::DEBUG::2013-01-30 16:42:28,911::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 7304b24f-5c4d-41b3-b12f-b916bc9f4299
Thread-788::DEBUG::2013-01-30 16:42:28,913::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 7304b24f-5c4d-41b3-b12f-b916bc9f4299
Thread-788::INFO::2013-01-30 16:42:28,914::logUtils::39::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '3256225792', 'apparentsize': '107374182400'}
Thread-788::DEBUG::2013-01-30 16:42:28,914::task::1172::TaskManager.Task::(prepare) Task=`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::finished: {'truesize': '3256225792', 'apparentsize': '107374182400'}
Thread-788::DEBUG::2013-01-30 16:42:28,915::task::588::TaskManager.Task::(_updateState) Task=`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::moving from state preparing -> state finished
Thread-788::DEBUG::2013-01-30 16:42:28,915::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.39484795-666f-44b3-9cf5-21bbb02531ad': < ResourceRef 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad', isValid: 'True' obj: 'None'>}
Thread-788::DEBUG::2013-01-30 16:42:28,915::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-788::DEBUG::2013-01-30 16:42:28,915::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad'
Thread-788::DEBUG::2013-01-30 16:42:28,915::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad' (0 active users)
Thread-788::DEBUG::2013-01-30 16:42:28,915::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad' is free, finding out if anyone is waiting for it.
Thread-788::DEBUG::2013-01-30 16:42:28,915::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad', Clearing records.
Thread-788::DEBUG::2013-01-30 16:42:28,915::task::978::TaskManager.Task::(_decref) Task=`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::ref 0 aborting False
VM Channels Listener::DEBUG::2013-01-30 16:42:29,494::vmChannels::60::vds::(_handle_timeouts) Timeout on fileno 322.
Thread-2412::DEBUG::2013-01-30 16:42:31,843::BindingXMLRPC::156::vds::(wrapper) [192.168.5.180]
Thread-2412::DEBUG::2013-01-30 16:42:31,843::task::588::TaskManager.Task::(_updateState) Task=`4095451f-5992-4276-8bbc-3cbd5c192905`::moving from state init -> state preparing
Thread-2412::INFO::2013-01-30 16:42:31,844::logUtils::37::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='7b44684c-5f34-11e2-beba-00138fbe3093', options=None)
Thread-2412::INFO::2013-01-30 16:42:31,844::logUtils::39::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 4}}
Thread-2412::DEBUG::2013-01-30 16:42:31,844::task::1172::TaskManager.Task::(prepare) Task=`4095451f-5992-4276-8bbc-3cbd5c192905`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 4}}
Thread-2412::DEBUG::2013-01-30 16:42:31,844::task::588::TaskManager.Task::(_updateState) Task=`4095451f-5992-4276-8bbc-3cbd5c192905`::moving from state preparing -> state finished
Thread-2412::DEBUG::2013-01-30 16:42:31,844::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-2412::DEBUG::2013-01-30 16:42:31,844::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2412::DEBUG::2013-01-30 16:42:31,844::task::978::TaskManager.Task::(_decref) Task=`4095451f-5992-4276-8bbc-3cbd5c192905`::ref 0 aborting False
Thread-2413::DEBUG::2013-01-30 16:42:31,936::BindingXMLRPC::156::vds::(wrapper) [192.168.5.180]
Thread-2413::DEBUG::2013-01-30 16:42:31,937::task::588::TaskManager.Task::(_updateState) Task=`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::moving from state init -> state preparing
Thread-2413::INFO::2013-01-30 16:42:31,937::logUtils::37::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='7b44684c-5f34-11e2-beba-00138fbe3093', options=None)
Thread-2413::DEBUG::2013-01-30 16:42:31,937::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.7b44684c-5f34-11e2-beba-00138fbe3093`ReqID=`e72c6606-db58-41d0-8072-c4fff9e0e417`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-2413::DEBUG::2013-01-30 16:42:31,937::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093' for lock type 'shared'
Thread-2413::DEBUG::2013-01-30 16:42:31,937::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093' is free. Now locking as 'shared' (1 active user)
Thread-2413::DEBUG::2013-01-30 16:42:31,938::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.7b44684c-5f34-11e2-beba-00138fbe3093`ReqID=`e72c6606-db58-41d0-8072-c4fff9e0e417`::Granted request
Thread-2413::DEBUG::2013-01-30 16:42:31,939::task::817::TaskManager.Task::(resourceAcquired) Task=`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::_resourcesAcquired: Storage.7b44684c-5f34-11e2-beba-00138fbe3093 (shared)
Thread-2413::DEBUG::2013-01-30 16:42:31,939::task::978::TaskManager.Task::(_decref) Task=`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::ref 1 aborting False
Thread-2413::INFO::2013-01-30 16:42:32,073::logUtils::39::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': 'cc060953-baf5-4b86-a66c-2f135b8fbb30', 'name': 'acloudDC', 'version': '0', 'domains': '39484795-666f-44b3-9cf5-21bbb02531ad:Active,0b6a87e8-53d9-46f3-bc81-20aab92f08aa:Active,cc060953-baf5-4b86-a66c-2f135b8fbb30:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/7b44684c-5f34-11e2-beba-00138fbe3093/0b6a87e8-53d9-46f3-bc81-20aab92f08aa/images/11111111-1111-1111-1111-111111111111', 'type': 'NFS', 'master_ver': 10575, 'lver': 4}, 'dominfo': {'0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'status': 'Active', 'diskfree': '17986224128', 'alerts': [], 'disktotal': '23648534528'}, '39484795-666f-44b3-9cf5-21bbb02531ad': {'status': 'Active', 'diskfree': '467744063488', 'alerts': [], 'disktotal': '476676358144'}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'status': 'Active', 'diskfree': '313801244672', 'alerts': [], 'disktotal': '982927736832'}}}
Thread-2413::DEBUG::2013-01-30 16:42:32,073::task::1172::TaskManager.Task::(prepare) Task=`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::finished: {'info': {'spm_id': 1, 'master_uuid': 'cc060953-baf5-4b86-a66c-2f135b8fbb30', 'name': 'acloudDC', 'version': '0', 'domains': '39484795-666f-44b3-9cf5-21bbb02531ad:Active,0b6a87e8-53d9-46f3-bc81-20aab92f08aa:Active,cc060953-baf5-4b86-a66c-2f135b8fbb30:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center/7b44684c-5f34-11e2-beba-00138fbe3093/0b6a87e8-53d9-46f3-bc81-20aab92f08aa/images/11111111-1111-1111-1111-111111111111', 'type': 'NFS', 'master_ver': 10575, 'lver': 4}, 'dominfo': {'0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'status': 'Active', 'diskfree': '17986224128', 'alerts': [], 'disktotal': '23648534528'}, '39484795-666f-44b3-9cf5-21bbb02531ad': {'status': 'Active', 'diskfree': '467744063488', 'alerts': [], 'disktotal': '476676358144'}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'status': 'Active', 'diskfree': '313801244672', 'alerts': [], 'disktotal': '982927736832'}}}
Thread-2413::DEBUG::2013-01-30 16:42:32,073::task::588::TaskManager.Task::(_updateState) Task=`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::moving from state preparing -> state finished
Thread-2413::DEBUG::2013-01-30 16:42:32,073::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.7b44684c-5f34-11e2-beba-00138fbe3093': < ResourceRef 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093', isValid: 'True' obj: 'None'>}
Thread-2413::DEBUG::2013-01-30 16:42:32,073::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2413::DEBUG::2013-01-30 16:42:32,074::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093'
Thread-2413::DEBUG::2013-01-30 16:42:32,074::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093' (0 active users)
Thread-2413::DEBUG::2013-01-30 16:42:32,074::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093' is free, finding out if anyone is waiting for it.
Thread-2413::DEBUG::2013-01-30 16:42:32,074::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093', Clearing records.
Thread-2413::DEBUG::2013-01-30 16:42:32,075::task::978::TaskManager.Task::(_decref) Task=`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::ref 0 aborting False
Thread-2414::DEBUG::2013-01-30 16:42:33,894::task::588::TaskManager.Task::(_updateState) Task=`e5af4772-7be5-43c0-a907-8a3f3fe0cebd`::moving from state init -> state preparing
Thread-2414::INFO::2013-01-30 16:42:33,894::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-2414::INFO::2013-01-30 16:42:33,894::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'39484795-666f-44b3-9cf5-21bbb02531ad': {'delay': '0.00196099281311', 'lastCheck': 1359564145.53994, 'code': 0, 'valid': True}, '0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'delay': '0.00284719467163', 'lastCheck': 1359564151.359623, 'code': 0, 'valid': True}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'delay': '0.0015709400177', 'lastCheck': 1359564153.38349, 'code': 0, 'valid': True}}
Thread-2414::DEBUG::2013-01-30 16:42:33,895::task::1172::TaskManager.Task::(prepare) Task=`e5af4772-7be5-43c0-a907-8a3f3fe0cebd`::finished: {'39484795-666f-44b3-9cf5-21bbb02531ad': {'delay': '0.00196099281311', 'lastCheck': 1359564145.53994, 'code': 0, 'valid': True}, '0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'delay': '0.00284719467163', 'lastCheck': 1359564151.359623, 'code': 0, 'valid': True}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'delay': '0.0015709400177', 'lastCheck': 1359564153.38349, 'code': 0, 'valid': True}}
Thread-2414::DEBUG::2013-01-30 16:42:33,895::task::588::TaskManager.Task::(_updateState) Task=`e5af4772-7be5-43c0-a907-8a3f3fe0cebd`::moving from state preparing -> state finished
Thread-2414::DEBUG::2013-01-30 16:42:33,895::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-2414::DEBUG::2013-01-30 16:42:33,895::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2414::DEBUG::2013-01-30 16:42:33,895::task::978::TaskManager.Task::(_decref) Task=`e5af4772-7be5-43c0-a907-8a3f3fe0cebd`::ref 0 aborting False
Thread-2415::DEBUG::2013-01-30 16:42:33,996::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`35fb5c70-3ac8-4b5e-8e48-8cffe9fdcf53`::Disk hdc stats not available
Thread-2415::DEBUG::2013-01-30 16:42:33,997::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`35fb5c70-3ac8-4b5e-8e48-8cffe9fdcf53`::Disk fda stats not available
Thread-424::DEBUG::2013-01-30 16:42:34,508::task::588::TaskManager.Task::(_updateState) Task=`413cbd4e-2ddc-4a21-9526-39e970dc48d3`::moving from state init -> state preparing
Thread-424::INFO::2013-01-30 16:42:34,509::logUtils::37::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='39484795-666f-44b3-9cf5-21bbb02531ad', spUUID='7b44684c-5f34-11e2-beba-00138fbe3093', imgUUID='c7524cbd-5f92-41ba-a9e9-8724dd2c4c11', volUUID='5904f9d7-09c3-404d-8887-71a32ff96735', options=None)
Thread-424::DEBUG::2013-01-30 16:42:34,509::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.39484795-666f-44b3-9cf5-21bbb02531ad`ReqID=`e4e2d280-6941-469f-be31-ac06bfb73f99`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-424::DEBUG::2013-01-30 16:42:34,509::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad' for lock type 'shared'
Thread-424::DEBUG::2013-01-30 16:42:34,509::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad' is free. Now locking as 'shared' (1 active user)
Thread-424::DEBUG::2013-01-30 16:42:34,509::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.39484795-666f-44b3-9cf5-21bbb02531ad`ReqID=`e4e2d280-6941-469f-be31-ac06bfb73f99`::Granted request
Thread-424::DEBUG::2013-01-30 16:42:34,509::task::817::TaskManager.Task::(resourceAcquired) Task=`413cbd4e-2ddc-4a21-9526-39e970dc48d3`::_resourcesAcquired: Storage.39484795-666f-44b3-9cf5-21bbb02531ad (shared)
Thread-424::DEBUG::2013-01-30 16:42:34,510::task::978::TaskManager.Task::(_decref) Task=`413cbd4e-2ddc-4a21-9526-39e970dc48d3`::ref 1 aborting False
Thread-424::DEBUG::2013-01-30 16:42:34,511::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 5904f9d7-09c3-404d-8887-71a32ff96735
---------------------------------------------
Logicworks Tecnologias de Informática
http://www.logicworks.pt
------=_Part_476_18436217.1359564233863
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable
<html><head><style type=3D'text/css'>p { margin: 0; }</style></head><body><=
div style=3D'font-family: verdana,helvetica,sans-serif; font-size: 10pt; co=
lor: #330066'>Hi,<br><br>I'm testing oVirt 3.1, with one node 2.5.5-0.1fc17=
and a NAS built with openfiler 2.99, working on a Gigabit net.<br>I config=
ured a NFS share on openfiler, and everything looks nice.<br>I'm installing=
a Windows 2008 R2 as a VM, time to time i get a warning message: ! VM is n=
ot responding, but the windows installation keeps going, a bit slow.<br><br=
>Any idea what is happening?<br><br>Thanks<br><br>Jose<br><br>the vdsm.log =
:<br><br>Thread-2405::DEBUG::2013-01-30 16:42:21,532::resourceManager::212:=
:ResourceManager.Request::(grant) ResName=3D`Storage.7b44684c-5f34-11e2-beb=
a-00138fbe3093`ReqID=3D`e9aefcf3-b4ed-4d7c-9d95-723f824c30ee`::Granted requ=
est<br>Thread-2405::DEBUG::2013-01-30 16:42:21,532::task::817::TaskManager.=
Task::(resourceAcquired) Task=3D`c581aa06-4e50-41dd-a836-756827865cc6`::_re=
sourcesAcquired: Storage.7b44684c-5f34-11e2-beba-00138fbe3093 (shared)<br>T=
hread-2405::DEBUG::2013-01-30 16:42:21,532::task::978::TaskManager.Task::(_=
decref) Task=3D`c581aa06-4e50-41dd-a836-756827865cc6`::ref 1 aborting False=
<br>Thread-2405::INFO::2013-01-30 16:42:21,664::logUtils::39::dispatcher::(=
wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'s=
pm_id': 1, 'master_uuid': 'cc060953-baf5-4b86-a66c-2f135b8fbb30', 'name': '=
acloudDC', 'version': '0', 'domains': '39484795-666f-44b3-9cf5-21bbb02531ad=
:Active,0b6a87e8-53d9-46f3-bc81-20aab92f08aa:Active,cc060953-baf5-4b86-a66c=
-2f135b8fbb30:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data=
-center/7b44684c-5f34-11e2-beba-00138fbe3093/0b6a87e8-53d9-46f3-bc81-20aab9=
2f08aa/images/11111111-1111-1111-1111-111111111111', 'type': 'NFS', 'master=
_ver': 10575, 'lver': 4}, 'dominfo': {'0b6a87e8-53d9-46f3-bc81-20aab92f08aa=
': {'status': 'Active', 'diskfree': '17986224128', 'alerts': [], 'disktotal=
': '23648534528'}, '39484795-666f-44b3-9cf5-21bbb02531ad': {'status': 'Acti=
ve', 'diskfree': '467744063488', 'alerts': [], 'disktotal': '476676358144'}=
, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'status': 'Active', 'diskfree': =
'313802031104', 'alerts': [], 'disktotal': '982927736832'}}}<br>Thread-2405=
::DEBUG::2013-01-30 16:42:21,664::task::1172::TaskManager.Task::(prepare) T=
ask=3D`c581aa06-4e50-41dd-a836-756827865cc6`::finished: {'info': {'spm_id':=
1, 'master_uuid': 'cc060953-baf5-4b86-a66c-2f135b8fbb30', 'name': 'acloudD=
C', 'version': '0', 'domains': '39484795-666f-44b3-9cf5-21bbb02531ad:Active=
,0b6a87e8-53d9-46f3-bc81-20aab92f08aa:Active,cc060953-baf5-4b86-a66c-2f135b=
8fbb30:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-center=
/7b44684c-5f34-11e2-beba-00138fbe3093/0b6a87e8-53d9-46f3-bc81-20aab92f08aa/=
images/11111111-1111-1111-1111-111111111111', 'type': 'NFS', 'master_ver': =
10575, 'lver': 4}, 'dominfo': {'0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'st=
atus': 'Active', 'diskfree': '17986224128', 'alerts': [], 'disktotal': '236=
48534528'}, '39484795-666f-44b3-9cf5-21bbb02531ad': {'status': 'Active', 'd=
iskfree': '467744063488', 'alerts': [], 'disktotal': '476676358144'}, 'cc06=
0953-baf5-4b86-a66c-2f135b8fbb30': {'status': 'Active', 'diskfree': '313802=
031104', 'alerts': [], 'disktotal': '982927736832'}}}<br>Thread-2405::DEBUG=
::2013-01-30 16:42:21,664::task::588::TaskManager.Task::(_updateState) Task=
=3D`c581aa06-4e50-41dd-a836-756827865cc6`::moving from state preparing ->=
; state finished<br>Thread-2405::DEBUG::2013-01-30 16:42:21,664::resourceMa=
nager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {=
} resources {'Storage.7b44684c-5f34-11e2-beba-00138fbe3093': < ResourceR=
ef 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093', isValid: 'True' obj: 'No=
ne'>}<br>Thread-2405::DEBUG::2013-01-30 16:42:21,664::resourceManager::8=
44::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br>Threa=
d-2405::DEBUG::2013-01-30 16:42:21,665::resourceManager::538::ResourceManag=
er::(releaseResource) Trying to release resource 'Storage.7b44684c-5f34-11e=
2-beba-00138fbe3093'<br>Thread-2405::DEBUG::2013-01-30 16:42:21,665::resour=
ceManager::553::ResourceManager::(releaseResource) Released resource 'Stora=
ge.7b44684c-5f34-11e2-beba-00138fbe3093' (0 active users)<br>Thread-2405::D=
EBUG::2013-01-30 16:42:21,665::resourceManager::558::ResourceManager::(rele=
aseResource) Resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093' is fre=
e, finding out if anyone is waiting for it.<br>Thread-2405::DEBUG::2013-01-=
30 16:42:21,665::resourceManager::565::ResourceManager::(releaseResource) N=
o one is waiting for resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093=
', Clearing records.<br>Thread-2405::DEBUG::2013-01-30 16:42:21,665::task::=
978::TaskManager.Task::(_decref) Task=3D`c581aa06-4e50-41dd-a836-756827865c=
c6`::ref 0 aborting False<br>Thread-2406::DEBUG::2013-01-30 16:42:23,304::t=
ask::588::TaskManager.Task::(_updateState) Task=3D`8926a671-3076-42e5-8521-=
f8ec6ba67748`::moving from state init -> state preparing<br>Thread-2406:=
:INFO::2013-01-30 16:42:23,304::logUtils::37::dispatcher::(wrapper) Run and=
protect: repoStats(options=3DNone)<br>Thread-2406::INFO::2013-01-30 16:42:=
23,304::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Ret=
urn response: {'39484795-666f-44b3-9cf5-21bbb02531ad': {'delay': '0.0017900=
4669189', 'lastCheck': 1359564135.40165, 'code': 0, 'valid': True}, '0b6a87=
e8-53d9-46f3-bc81-20aab92f08aa': {'delay': '0.00332403182983', 'lastCheck':=
1359564141.220177, 'code': 0, 'valid': True}, 'cc060953-baf5-4b86-a66c-2f1=
35b8fbb30': {'delay': '0.00157809257507', 'lastCheck': 1359564143.248153, '=
code': 0, 'valid': True}}<br>Thread-2406::DEBUG::2013-01-30 16:42:23,305::t=
ask::1172::TaskManager.Task::(prepare) Task=3D`8926a671-3076-42e5-8521-f8ec=
6ba67748`::finished: {'39484795-666f-44b3-9cf5-21bbb02531ad': {'delay': '0.=
00179004669189', 'lastCheck': 1359564135.40165, 'code': 0, 'valid': True}, =
'0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'delay': '0.00332403182983', 'last=
Check': 1359564141.220177, 'code': 0, 'valid': True}, 'cc060953-baf5-4b86-a=
66c-2f135b8fbb30': {'delay': '0.00157809257507', 'lastCheck': 1359564143.24=
8153, 'code': 0, 'valid': True}}<br>Thread-2406::DEBUG::2013-01-30 16:42:23=
,305::task::588::TaskManager.Task::(_updateState) Task=3D`8926a671-3076-42e=
5-8521-f8ec6ba67748`::moving from state preparing -> state finished<br>T=
hread-2406::DEBUG::2013-01-30 16:42:23,305::resourceManager::809::ResourceM=
anager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}<br>Thr=
ead-2406::DEBUG::2013-01-30 16:42:23,305::resourceManager::844::ResourceMan=
ager.Owner::(cancelAll) Owner.cancelAll requests {}<br>Thread-2406::DEBUG::=
2013-01-30 16:42:23,305::task::978::TaskManager.Task::(_decref) Task=3D`892=
6a671-3076-42e5-8521-f8ec6ba67748`::ref 0 aborting False<br>Thread-2407::DE=
BUG::2013-01-30 16:42:23,368::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=
=3D`35fb5c70-3ac8-4b5e-8e48-8cffe9fdcf53`::Disk hdc stats not available<br>=
Thread-2407::DEBUG::2013-01-30 16:42:23,368::libvirtvm::240::vm.Vm::(_getDi=
skStats) vmId=3D`35fb5c70-3ac8-4b5e-8e48-8cffe9fdcf53`::Disk fda stats not =
available<br>VM Channels Listener::DEBUG::2013-01-30 16:42:27,492::vmChanne=
ls::60::vds::(_handle_timeouts) Timeout on fileno 18.<br>Thread-788::DEBUG:=
:2013-01-30 16:42:28,907::task::588::TaskManager.Task::(_updateState) Task=
=3D`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::moving from state init -> sta=
te preparing<br>Thread-788::INFO::2013-01-30 16:42:28,907::logUtils::37::di=
spatcher::(wrapper) Run and protect: getVolumeSize(sdUUID=3D'39484795-666f-=
44b3-9cf5-21bbb02531ad', spUUID=3D'7b44684c-5f34-11e2-beba-00138fbe3093', i=
mgUUID=3D'2ff4c743-6e5a-42af-8b99-131c8849bd3f', volUUID=3D'7304b24f-5c4d-4=
1b3-b12f-b916bc9f4299', options=3DNone)<br>Thread-788::DEBUG::2013-01-30 16=
:42:28,908::resourceManager::175::ResourceManager.Request::(__init__) ResNa=
me=3D`Storage.39484795-666f-44b3-9cf5-21bbb02531ad`ReqID=3D`9ba33a25-cdd2-4=
fc0-9f50-55a82178ad48`::Request was made in '/usr/share/vdsm/storage/resour=
ceManager.py' line '485' at 'registerResource'<br>Thread-788::DEBUG::2013-0=
1-30 16:42:28,908::resourceManager::486::ResourceManager::(registerResource=
) Trying to register resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad=
' for lock type 'shared'<br>Thread-788::DEBUG::2013-01-30 16:42:28,908::res=
ourceManager::528::ResourceManager::(registerResource) Resource 'Storage.39=
484795-666f-44b3-9cf5-21bbb02531ad' is free. Now locking as 'shared' (1 act=
ive user)<br>Thread-788::DEBUG::2013-01-30 16:42:28,908::resourceManager::2=
12::ResourceManager.Request::(grant) ResName=3D`Storage.39484795-666f-44b3-=
9cf5-21bbb02531ad`ReqID=3D`9ba33a25-cdd2-4fc0-9f50-55a82178ad48`::Granted r=
equest<br>Thread-788::DEBUG::2013-01-30 16:42:28,908::task::817::TaskManage=
r.Task::(resourceAcquired) Task=3D`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::_=
resourcesAcquired: Storage.39484795-666f-44b3-9cf5-21bbb02531ad (shared)<br=
>Thread-788::DEBUG::2013-01-30 16:42:28,909::task::978::TaskManager.Task::(=
_decref) Task=3D`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::ref 1 aborting Fals=
e<br>Thread-788::DEBUG::2013-01-30 16:42:28,911::fileVolume::535::Storage.V=
olume::(validateVolumePath) validate path for 7304b24f-5c4d-41b3-b12f-b916b=
c9f4299<br>Thread-788::DEBUG::2013-01-30 16:42:28,913::fileVolume::535::Sto=
rage.Volume::(validateVolumePath) validate path for 7304b24f-5c4d-41b3-b12f=
-b916bc9f4299<br>Thread-788::INFO::2013-01-30 16:42:28,914::logUtils::39::d=
ispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'tru=
esize': '3256225792', 'apparentsize': '107374182400'}<br>Thread-788::DEBUG:=
:2013-01-30 16:42:28,914::task::1172::TaskManager.Task::(prepare) Task=3D`a=
ec127b3-b02b-4bc3-b0e9-542c90b4c91e`::finished: {'truesize': '3256225792', =
'apparentsize': '107374182400'}<br>Thread-788::DEBUG::2013-01-30 16:42:28,9=
15::task::588::TaskManager.Task::(_updateState) Task=3D`aec127b3-b02b-4bc3-=
b0e9-542c90b4c91e`::moving from state preparing -> state finished<br>Thr=
ead-788::DEBUG::2013-01-30 16:42:28,915::resourceManager::809::ResourceMana=
ger.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.39=
484795-666f-44b3-9cf5-21bbb02531ad': < ResourceRef 'Storage.39484795-666=
f-44b3-9cf5-21bbb02531ad', isValid: 'True' obj: 'None'>}<br>Thread-788::=
DEBUG::2013-01-30 16:42:28,915::resourceManager::844::ResourceManager.Owner=
::(cancelAll) Owner.cancelAll requests {}<br>Thread-788::DEBUG::2013-01-30 =
16:42:28,915::resourceManager::538::ResourceManager::(releaseResource) Tryi=
ng to release resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad'<br>Th=
read-788::DEBUG::2013-01-30 16:42:28,915::resourceManager::553::ResourceMan=
ager::(releaseResource) Released resource 'Storage.39484795-666f-44b3-9cf5-=
21bbb02531ad' (0 active users)<br>Thread-788::DEBUG::2013-01-30 16:42:28,91=
5::resourceManager::558::ResourceManager::(releaseResource) Resource 'Stora=
ge.39484795-666f-44b3-9cf5-21bbb02531ad' is free, finding out if anyone is =
waiting for it.<br>Thread-788::DEBUG::2013-01-30 16:42:28,915::resourceMana=
ger::565::ResourceManager::(releaseResource) No one is waiting for resource=
'Storage.39484795-666f-44b3-9cf5-21bbb02531ad', Clearing records.<br>Threa=
d-788::DEBUG::2013-01-30 16:42:28,915::task::978::TaskManager.Task::(_decre=
f) Task=3D`aec127b3-b02b-4bc3-b0e9-542c90b4c91e`::ref 0 aborting False<br>V=
M Channels Listener::DEBUG::2013-01-30 16:42:29,494::vmChannels::60::vds::(=
_handle_timeouts) Timeout on fileno 322.<br>Thread-2412::DEBUG::2013-01-30 =
16:42:31,843::BindingXMLRPC::156::vds::(wrapper) [192.168.5.180]<br>Thread-=
2412::DEBUG::2013-01-30 16:42:31,843::task::588::TaskManager.Task::(_update=
State) Task=3D`4095451f-5992-4276-8bbc-3cbd5c192905`::moving from state ini=
t -> state preparing<br>Thread-2412::INFO::2013-01-30 16:42:31,844::logU=
tils::37::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID=3D'7b4=
4684c-5f34-11e2-beba-00138fbe3093', options=3DNone)<br>Thread-2412::INFO::2=
013-01-30 16:42:31,844::logUtils::39::dispatcher::(wrapper) Run and protect=
: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM'=
, 'spmLver': 4}}<br>Thread-2412::DEBUG::2013-01-30 16:42:31,844::task::1172=
::TaskManager.Task::(prepare) Task=3D`4095451f-5992-4276-8bbc-3cbd5c192905`=
::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 4}}<br>T=
hread-2412::DEBUG::2013-01-30 16:42:31,844::task::588::TaskManager.Task::(_=
updateState) Task=3D`4095451f-5992-4276-8bbc-3cbd5c192905`::moving from sta=
te preparing -> state finished<br>Thread-2412::DEBUG::2013-01-30 16:42:3=
1,844::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.rele=
aseAll requests {} resources {}<br>Thread-2412::DEBUG::2013-01-30 16:42:31,=
844::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelA=
ll requests {}<br>Thread-2412::DEBUG::2013-01-30 16:42:31,844::task::978::T=
askManager.Task::(_decref) Task=3D`4095451f-5992-4276-8bbc-3cbd5c192905`::r=
ef 0 aborting False<br>Thread-2413::DEBUG::2013-01-30 16:42:31,936::Binding=
XMLRPC::156::vds::(wrapper) [192.168.5.180]<br>Thread-2413::DEBUG::2013-01-=
30 16:42:31,937::task::588::TaskManager.Task::(_updateState) Task=3D`42cacf=
2e-311a-4ca4-8158-6b3dd8f00a94`::moving from state init -> state prepari=
ng<br>Thread-2413::INFO::2013-01-30 16:42:31,937::logUtils::37::dispatcher:=
:(wrapper) Run and protect: getStoragePoolInfo(spUUID=3D'7b44684c-5f34-11e2=
-beba-00138fbe3093', options=3DNone)<br>Thread-2413::DEBUG::2013-01-30 16:4=
2:31,937::resourceManager::175::ResourceManager.Request::(__init__) ResName=
=3D`Storage.7b44684c-5f34-11e2-beba-00138fbe3093`ReqID=3D`e72c6606-db58-41d=
0-8072-c4fff9e0e417`::Request was made in '/usr/share/vdsm/storage/resource=
Manager.py' line '485' at 'registerResource'<br>Thread-2413::DEBUG::2013-01=
-30 16:42:31,937::resourceManager::486::ResourceManager::(registerResource)=
Trying to register resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093'=
for lock type 'shared'<br>Thread-2413::DEBUG::2013-01-30 16:42:31,937::res=
ourceManager::528::ResourceManager::(registerResource) Resource 'Storage.7b=
44684c-5f34-11e2-beba-00138fbe3093' is free. Now locking as 'shared' (1 act=
ive user)<br>Thread-2413::DEBUG::2013-01-30 16:42:31,938::resourceManager::=
212::ResourceManager.Request::(grant) ResName=3D`Storage.7b44684c-5f34-11e2=
-beba-00138fbe3093`ReqID=3D`e72c6606-db58-41d0-8072-c4fff9e0e417`::Granted =
request<br>Thread-2413::DEBUG::2013-01-30 16:42:31,939::task::817::TaskMana=
ger.Task::(resourceAcquired) Task=3D`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`:=
:_resourcesAcquired: Storage.7b44684c-5f34-11e2-beba-00138fbe3093 (shared)<=
br>Thread-2413::DEBUG::2013-01-30 16:42:31,939::task::978::TaskManager.Task=
::(_decref) Task=3D`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::ref 1 aborting F=
alse<br>Thread-2413::INFO::2013-01-30 16:42:32,073::logUtils::39::dispatche=
r::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info':=
{'spm_id': 1, 'master_uuid': 'cc060953-baf5-4b86-a66c-2f135b8fbb30', 'name=
': 'acloudDC', 'version': '0', 'domains': '39484795-666f-44b3-9cf5-21bbb025=
31ad:Active,0b6a87e8-53d9-46f3-bc81-20aab92f08aa:Active,cc060953-baf5-4b86-=
a66c-2f135b8fbb30:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/=
data-center/7b44684c-5f34-11e2-beba-00138fbe3093/0b6a87e8-53d9-46f3-bc81-20=
aab92f08aa/images/11111111-1111-1111-1111-111111111111', 'type': 'NFS', 'ma=
ster_ver': 10575, 'lver': 4}, 'dominfo': {'0b6a87e8-53d9-46f3-bc81-20aab92f=
08aa': {'status': 'Active', 'diskfree': '17986224128', 'alerts': [], 'diskt=
otal': '23648534528'}, '39484795-666f-44b3-9cf5-21bbb02531ad': {'status': '=
Active', 'diskfree': '467744063488', 'alerts': [], 'disktotal': '4766763581=
44'}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'status': 'Active', 'diskfre=
e': '313801244672', 'alerts': [], 'disktotal': '982927736832'}}}<br>Thread-=
2413::DEBUG::2013-01-30 16:42:32,073::task::1172::TaskManager.Task::(prepar=
e) Task=3D`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::finished: {'info': {'spm_=
id': 1, 'master_uuid': 'cc060953-baf5-4b86-a66c-2f135b8fbb30', 'name': 'acl=
oudDC', 'version': '0', 'domains': '39484795-666f-44b3-9cf5-21bbb02531ad:Ac=
tive,0b6a87e8-53d9-46f3-bc81-20aab92f08aa:Active,cc060953-baf5-4b86-a66c-2f=
135b8fbb30:Active', 'pool_status': 'connected', 'isoprefix': '/rhev/data-ce=
nter/7b44684c-5f34-11e2-beba-00138fbe3093/0b6a87e8-53d9-46f3-bc81-20aab92f0=
8aa/images/11111111-1111-1111-1111-111111111111', 'type': 'NFS', 'master_ve=
r': 10575, 'lver': 4}, 'dominfo': {'0b6a87e8-53d9-46f3-bc81-20aab92f08aa': =
{'status': 'Active', 'diskfree': '17986224128', 'alerts': [], 'disktotal': =
'23648534528'}, '39484795-666f-44b3-9cf5-21bbb02531ad': {'status': 'Active'=
, 'diskfree': '467744063488', 'alerts': [], 'disktotal': '476676358144'}, '=
cc060953-baf5-4b86-a66c-2f135b8fbb30': {'status': 'Active', 'diskfree': '31=
3801244672', 'alerts': [], 'disktotal': '982927736832'}}}<br>Thread-2413::D=
EBUG::2013-01-30 16:42:32,073::task::588::TaskManager.Task::(_updateState) =
Task=3D`42cacf2e-311a-4ca4-8158-6b3dd8f00a94`::moving from state preparing =
-> state finished<br>Thread-2413::DEBUG::2013-01-30 16:42:32,073::resour=
ceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll reques=
ts {} resources {'Storage.7b44684c-5f34-11e2-beba-00138fbe3093': < Resou=
rceRef 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093', isValid: 'True' obj:=
'None'>}<br>Thread-2413::DEBUG::2013-01-30 16:42:32,073::resourceManage=
r::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br>T=
hread-2413::DEBUG::2013-01-30 16:42:32,074::resourceManager::538::ResourceM=
anager::(releaseResource) Trying to release resource 'Storage.7b44684c-5f34=
-11e2-beba-00138fbe3093'<br>Thread-2413::DEBUG::2013-01-30 16:42:32,074::re=
sourceManager::553::ResourceManager::(releaseResource) Released resource 'S=
torage.7b44684c-5f34-11e2-beba-00138fbe3093' (0 active users)<br>Thread-241=
3::DEBUG::2013-01-30 16:42:32,074::resourceManager::558::ResourceManager::(=
releaseResource) Resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe3093' is=
free, finding out if anyone is waiting for it.<br>Thread-2413::DEBUG::2013=
-01-30 16:42:32,074::resourceManager::565::ResourceManager::(releaseResourc=
e) No one is waiting for resource 'Storage.7b44684c-5f34-11e2-beba-00138fbe=
3093', Clearing records.<br>Thread-2413::DEBUG::2013-01-30 16:42:32,075::ta=
sk::978::TaskManager.Task::(_decref) Task=3D`42cacf2e-311a-4ca4-8158-6b3dd8=
f00a94`::ref 0 aborting False<br>Thread-2414::DEBUG::2013-01-30 16:42:33,89=
4::task::588::TaskManager.Task::(_updateState) Task=3D`e5af4772-7be5-43c0-a=
907-8a3f3fe0cebd`::moving from state init -> state preparing<br>Thread-2=
414::INFO::2013-01-30 16:42:33,894::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-2414::INFO::2013-01-30 16:42:33,894::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'39484795-666f-44b3-9cf5-21bbb02531ad': {'delay': '0.00196099281311', 'lastCheck': 1359564145.53994, 'code': 0, 'valid': True}, '0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'delay': '0.00284719467163', 'lastCheck': 1359564151.359623, 'code': 0, 'valid': True}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'delay': '0.0015709400177', 'lastCheck': 1359564153.38349, 'code': 0, 'valid': True}}
Thread-2414::DEBUG::2013-01-30 16:42:33,895::task::1172::TaskManager.Task::(prepare) Task=`e5af4772-7be5-43c0-a907-8a3f3fe0cebd`::finished: {'39484795-666f-44b3-9cf5-21bbb02531ad': {'delay': '0.00196099281311', 'lastCheck': 1359564145.53994, 'code': 0, 'valid': True}, '0b6a87e8-53d9-46f3-bc81-20aab92f08aa': {'delay': '0.00284719467163', 'lastCheck': 1359564151.359623, 'code': 0, 'valid': True}, 'cc060953-baf5-4b86-a66c-2f135b8fbb30': {'delay': '0.0015709400177', 'lastCheck': 1359564153.38349, 'code': 0, 'valid': True}}
Thread-2414::DEBUG::2013-01-30 16:42:33,895::task::588::TaskManager.Task::(_updateState) Task=`e5af4772-7be5-43c0-a907-8a3f3fe0cebd`::moving from state preparing -> state finished
Thread-2414::DEBUG::2013-01-30 16:42:33,895::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-2414::DEBUG::2013-01-30 16:42:33,895::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2414::DEBUG::2013-01-30 16:42:33,895::task::978::TaskManager.Task::(_decref) Task=`e5af4772-7be5-43c0-a907-8a3f3fe0cebd`::ref 0 aborting False
Thread-2415::DEBUG::2013-01-30 16:42:33,996::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`35fb5c70-3ac8-4b5e-8e48-8cffe9fdcf53`::Disk hdc stats not available
Thread-2415::DEBUG::2013-01-30 16:42:33,997::libvirtvm::240::vm.Vm::(_getDiskStats) vmId=`35fb5c70-3ac8-4b5e-8e48-8cffe9fdcf53`::Disk fda stats not available
Thread-424::DEBUG::2013-01-30 16:42:34,508::task::588::TaskManager.Task::(_updateState) Task=`413cbd4e-2ddc-4a21-9526-39e970dc48d3`::moving from state init -> state preparing
Thread-424::INFO::2013-01-30 16:42:34,509::logUtils::37::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='39484795-666f-44b3-9cf5-21bbb02531ad', spUUID='7b44684c-5f34-11e2-beba-00138fbe3093', imgUUID='c7524cbd-5f92-41ba-a9e9-8724dd2c4c11', volUUID='5904f9d7-09c3-404d-8887-71a32ff96735', options=None)
Thread-424::DEBUG::2013-01-30 16:42:34,509::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.39484795-666f-44b3-9cf5-21bbb02531ad`ReqID=`e4e2d280-6941-469f-be31-ac06bfb73f99`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-424::DEBUG::2013-01-30 16:42:34,509::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad' for lock type 'shared'
Thread-424::DEBUG::2013-01-30 16:42:34,509::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.39484795-666f-44b3-9cf5-21bbb02531ad' is free. Now locking as 'shared' (1 active user)
Thread-424::DEBUG::2013-01-30 16:42:34,509::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.39484795-666f-44b3-9cf5-21bbb02531ad`ReqID=`e4e2d280-6941-469f-be31-ac06bfb73f99`::Granted request
Thread-424::DEBUG::2013-01-30 16:42:34,509::task::817::TaskManager.Task::(resourceAcquired) Task=`413cbd4e-2ddc-4a21-9526-39e970dc48d3`::_resourcesAcquired: Storage.39484795-666f-44b3-9cf5-21bbb02531ad (shared)
Thread-424::DEBUG::2013-01-30 16:42:34,510::task::978::TaskManager.Task::(_decref) Task=`413cbd4e-2ddc-4a21-9526-39e970dc48d3`::ref 1 aborting False
Thread-424::DEBUG::2013-01-30 16:42:34,511::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 5904f9d7-09c3-404d-8887-71a32ff96735
---------------------------------------------
Logicworks Tecnologias de Informática
http://www.logicworks.pt
12 years, 3 months
[Users] oVirt Weekly Meeting Minutes -- 2013-01-30
by Mike Burns
Minutes:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-30-15.00.html
Minutes (text):
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-30-15.00.txt
Log:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-30-15.00.log.html
============================
#ovirt: oVirt weekly meeting
============================
Meeting started by mburns at 15:00:43 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-30-15.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 15:01:17)
* Workshop update (mburns, 15:03:28)
* Sunnyvale workshop went great (mburns, 15:05:33)
* almost 100 registered (~30% no-show for first 2 days) (mburns,
15:05:46)
* board meeting had almost full attendance (mburns, 15:06:09)
* got support for proposed marketing, now need to follow up with the
community (mburns, 15:06:26)
* next event: Oved Ourfali presenting on deltacloud integration into
oVirt at Puppet Camp (mburns, 15:09:04)
* FOSDEM -- we are out in force with the virt devroom (mburns,
15:09:23)
* finalized dates for Shanghai workshop at Intel -- May 8-9 (mburns,
15:09:45)
* release status (mburns, 15:11:55)
* test day tomorrow (31-Jan) (mburns, 15:12:06)
* beta announcement ready, to be sent later today (mburns, 15:12:21)
* ovirt-node packages and image in final testing, to be posted today
(mburns, 15:12:50)
* ovirt-node blocked by vdsm bug 905728 (patch:
http://gerrit.ovirt.org/#/c/11527/ ) (mburns, 15:14:03)
* but we can workaround this issue in ovirt-node for the beta
(mburns, 15:14:19)
* Test Day: all are invited to participate (mburns, 15:16:42)
* please sign up on http://www.ovirt.org/Testing/OvirtTestDay
(mburns, 15:17:02)
* ACTION: mburns to review testday wiki node section (mburns,
15:21:39)
* ACTION: mburns to send out beta announcement (mburns, 15:21:50)
* ACTION: mburns to post ovirt-node for beta/test day (mburns,
15:21:59)
* LINK:
https://bugzilla.redhat.com/showdependencytree.cgi?id=881006&hide_resolved=1
(mburns, 15:22:38)
* ^^ a list of the bugs being tracked for the release (mburns,
15:23:05)
* Bug 879180 -- https://bugzilla.redhat.com/show_bug.cgi?id=879180
(mburns, 15:23:51)
* New -- against NetworkManager (mburns, 15:24:01)
* ACTION: ovirt-node bugs on the list are all targeted for the beta,
mburns will update bugs later today (mburns, 15:27:17)
* only bugs not at least on POST in the list (outside ovirt-node bugs)
are 879180 and 884990 (mburns, 15:27:59)
* both of which are in core fedora components (mburns, 15:28:13)
* not ovirt components (mburns, 15:28:48)
* Infra report (mburns, 15:35:27)
* new hardware from AlterWay not set up yet, but in progress (mburns,
15:37:13)
* infra team meetup at FOSDEM ( ewoud quaid dneary Rydekull )
(mburns, 15:38:18)
* Other topics (mburns, 15:39:15)
Meeting ended at 15:46:46 UTC.
Action Items
------------
* mburns to review testday wiki node section
* mburns to send out beta announcement
* mburns to post ovirt-node for beta/test day
* ovirt-node bugs on the list are all targeted for the beta, mburns will
update bugs later today
Action Items, by person
-----------------------
* mburns
* mburns to review testday wiki node section
* mburns to send out beta announcement
* mburns to post ovirt-node for beta/test day
* ovirt-node bugs on the list are all targeted for the beta, mburns
will update bugs later today
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (94)
* dneary (18)
* oschreib (12)
* mgoldboi (10)
* danken1 (6)
* Rydekull (6)
* ewoud (5)
* ovirtbot (5)
* Jur (3)
* jb_netapp (2)
* YamaKasY (2)
* teuf_ (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
12 years, 3 months
Re: [Users] Rest-api to fetch the hosts details ( active vm's , CPU , Physical memory etc.)
by Michael Pasternak
On 01/30/2013 03:09 PM, Romil Gupta wrote:
> thanks for all your guidance, now I'm able to fetch the details of a host using
> the script below:
>
> hosts=api.hosts.list()
> for host in hosts:
>     print "host name--> %s id--->> %s \n" % (host.name, host.id)
>     clusterid=api.hosts.get(host.name).cluster.id
>     print clusterid
>
>     hostname=api.hosts.get(host.name)
>     statistic=hostname.statistics.list()
>     i=0
>     while i < 14:
>         print statistic[i].name
>         print statistic[i].description
>         print statistic[i].unit
>         print statistic[i].values.value[0].datum
>         i=i+1
>
>
> summary=api.get_summary()
> print summary
>
> How can I print the summary? It only returns an object.
this is the summary object structure:
<summary>
    <vms>
        <total></total>
        <active></active>
    </vms>
    <hosts>
        <total></total>
        <active></active>
    </hosts>
    <users>
        <total></total>
        <active></active>
    </users>
    <storage_domains>
        <total></total>
        <active></active>
    </storage_domains>
</summary>
you can access properties directly, like this:
summary.hosts.active
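For example, a minimal end-to-end sketch along these lines (the engine URL and credentials below are placeholders, not values from this thread):
from ovirtsdk.api import API
# placeholder connection details -- point these at your own engine
api = API(url='https://engine.example.com', username='admin@internal', password='secret')
s = api.get_summary()
# every summary element carries a total/active pair
print "vms:             %s active / %s total" % (s.vms.active, s.vms.total)
print "hosts:           %s active / %s total" % (s.hosts.active, s.hosts.total)
print "users:           %s active / %s total" % (s.users.active, s.users.total)
print "storage domains: %s active / %s total" % (s.storage_domains.active, s.storage_domains.total)
api.disconnect()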
>
> Thanks,
> Romil
>
>
> On Wed, Jan 30, 2013 at 4:52 PM, Michael Pasternak <mpastern(a)redhat.com> wrote:
>
>
> Romil,
>
> On 01/30/2013 12:18 PM, Romil Gupta wrote:
> > Hi,
> >
> > Is this the right way to get it?
> >
> > statistics=params.Host(host.name).get_statistic()
>
> 1. first you need to fetch the host to see its statistics (by doing params.Host(...) you are creating
> a host parameters holder, which is needed for adding a new host to the system)
>
> 2. get_x() getters are used to access object attributes, while collections are exposed as properties, so do:
>
> 1. myhost = api.hosts.get(name="xxx")
> 2. myhost.statistics.list()
> 3. loop over returned collection of statistics to find what you're looking for
>
> - note, statistic objects are complex types, you can look for data at:
>
> statistics[i].unit // the unit of the holder data
> statistics[i].values.value[0].datum // actual data
>
> > print statistics
> >
> > summary=params.Host(host.name).get_summary()
>
> summary() is an api method, do:
>
> 1. api = API(url='', username='', password='')
> 2. api.get_summary()
>
>
> > print summary
> >
> >
> > Output is : none
> >
> > Thanks
> > Romil
> >
> >
> > On Wed, Jan 30, 2013 at 2:04 PM, Michael Pasternak <mpastern(a)redhat.com> wrote:
> >
> >
> > Hi Romil,
> >
> > On 01/30/2013 10:17 AM, Romil Gupta wrote:
> > > Hi all ,
> > >
> > > how I can get the hosts details like Active VM's ,
> >
> > the host doesn't have a running-vms attribute; instead, on the
> > guest you can see which host it's running on,
> >
> > general system summary you can see at api.get_summary()
> >
> > Number of CPU's , CPU name , CPU type ,
> >
> > these are host attributes
> >
> > Physical Memory (used , free ) , swap size and other parameters
> >
> > these are host.statistics attributes
> >
> > > using ovirt-engine-sdk-3.2.0.5-1.
> > >
> > >
> > >
> > > Regards,
> > > Romil
> > >
> > > --
> > > I don't wish to be everything to everyone, but I would like to be something to someone.
> >
> >
> > --
> >
> > Michael Pasternak
> > RedHat, ENG-Virtualization R&D
> >
> >
> >
> >
> > --
> > I don't wish to be everything to everyone, but I would like to be something to someone.
>
>
> --
>
> Michael Pasternak
> RedHat, ENG-Virtualization R&D
>
>
>
>
> --
> I don't wish to be everything to everyone, but I would like to be something to someone.
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
12 years, 3 months
Re: [Users] Rest-api to fetch the hosts details ( active vm's , CPU , Physical memory etc.)
by Michael Pasternak
Romil,
On 01/30/2013 12:18 PM, Romil Gupta wrote:
> Hi,
>
> Is this the right way to get it?
>
> statistics=params.Host(host.name).get_statistic()
1. first you need to fetch the host to see its statistics (by doing params.Host(...) you are creating
a host parameters holder, which is needed for adding a new host to the system)
2. get_x() getters are used to access object attributes, while collections are exposed as properties, so do:
1. myhost = api.hosts.get(name="xxx")
2. myhost.statistics.list()
3. loop over returned collection of statistics to find what you're looking for
- note, statistic objects are complex types, you can look for data at:
statistics[i].unit // the unit of the holder data
statistics[i].values.value[0].datum // actual data
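Putting 1-3 together, a short sketch (the URL, credentials and host name "xxx" below are placeholders):
from ovirtsdk.api import API
api = API(url='https://engine.example.com', username='admin@internal', password='secret')  # placeholders
myhost = api.hosts.get(name="xxx")
# walk the whole statistics collection instead of indexing by a fixed count
for stat in myhost.statistics.list():
    print "%s (%s): %s" % (stat.name, stat.unit, stat.values.value[0].datum)
api.disconnect()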
> print statistics
>
> summary=params.Host(host.name).get_summary()
summary() is an api method, do:
1. api = API(url='', username='', password='')
2. api.get_summary()
> print summary
>
>
> Output is : none
>
> Thanks
> Romil
>
>
> On Wed, Jan 30, 2013 at 2:04 PM, Michael Pasternak <mpastern(a)redhat.com> wrote:
>
>
> Hi Romil,
>
> On 01/30/2013 10:17 AM, Romil Gupta wrote:
> > Hi all ,
> >
> > how I can get the hosts details like Active VM's ,
>
> the host doesn't have a running-vms attribute; instead, on the
> guest you can see which host it's running on,
>
> general system summary you can see at api.get_summary()
>
> Number of CPU's , CPU name , CPU type ,
>
> these are host attributes
>
> Physical Memory (used , free ) , swap size and other parameters
>
> these are host.statistics attributes
>
> > using ovirt-engine-sdk-3.2.0.5-1.
> >
> >
> >
> > Regards,
> > Romil
> >
> > --
> > I don't wish to be everything to everyone, but I would like to be something to someone.
>
>
> --
>
> Michael Pasternak
> RedHat, ENG-Virtualization R&D
>
>
>
>
> --
> I don't wish to be everything to everyone, but I would like to be something to someone.
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
12 years, 3 months
[Users] ovirt-engine-sdk-java 1.0.0.3-1 released
by Michael Pasternak
- added persistent authentication support
- added support for method overloads based on url/header params
- added delete method overloads with a body as the parameters holder
- added a [display.address] property to host for overriding the display address
- users can now specify their own ticket in vm.ticket() via [action.ticket.value]
More details can be found at [1].
[1] http://www.ovirt.org/Java-sdk-changelog
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
12 years, 3 months
[Users] Engine upgrade failure broken in master
by Dead Horse
Commit: 72a51f5e21f38bf259a460948670eac92e97ca24
Breaks engine upgrades:
2013-01-29 12:40:52::ERROR::engine-upgrade::1177::root:: Traceback (most recent call last):
  File "/usr/bin/engine-upgrade", line 1170, in <module>
    main(options)
  File "/usr/bin/engine-upgrade", line 1071, in main
    if zombieTasksFound():
  File "/usr/bin/engine-upgrade", line 766, in zombieTasksFound
    msg="Can't get zombie async tasks",
  File "/usr/share/ovirt-engine/scripts/common_utils.py", line 459, in execCmd
    env=env,
  File "/usr/lib64/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
OSError: [Errno 8] Exec format error
Upgrade was attempted on engine built from commit:
61c11aecc40e755d08b6c34c6fe1c0a07fa94de8
Building from commit: 82bdc46dfdb46b000f67f0cd4e51fc39665bf13b and
upgrading works as expected.
- DHC
12 years, 3 months
[Users] node iso installer drops into dracut with Supermicro Virtual CD drive
by Jorick Astrego
Hi,
I'm still having the same problems booting from a virtual CD
with the latest node iso
(ovirt-node-iso-2.6.0-20130125090303git3839439.566.fc18.iso).
The node boots into the installer and then drops into a dracut shell:
dracut-initqueue[488]: Warning: Could not boot.
dracut-initqueue[488]: Warning: /dev/disk/by-label/ovirt-node-iso does
not exist
dracut-initqueue[488]: Warning: /dev/mapper/live-rw does not exist
dracut:/# blkid
/dev/sr0: UUID="2013-01-28-01-35-21-00" LABEL="ovirt-node-iso"
TYPE="iso9660" PTTYPE="dos"
I'll use foreman to deploy it for now but it's making casual testing a
bit harder.
--
Kind Regards,
Jorick Astrego
Netbulae B.V.
12 years, 3 months
[Users] Rest-api to fetch the hosts details ( active vm's , CPU , Physical memory etc.)
by Romil Gupta
Hi all ,
how can I get host details like active VMs, number of CPUs, CPU
name, CPU type, physical memory (used, free), swap size and
other parameters using ovirt-engine-sdk-3.2.0.5-1?
Regards,
Romil
--
I don't wish to be everything to everyone, but I would like to be something
to someone.
12 years, 3 months
[Users] engine Failed to decrypt Data error
by Dead Horse
I see this repeating error in the engine logs quite a bit, any ideas on
what causes it?
2013-01-28 13:13:40,483 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-23) Failed to decrypt Data must not be longer than
256 bytes
2013-01-28 13:13:52,747 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-81) Failed to decrypt Data must not be longer than
256 bytes
2013-01-28 13:13:52,747 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-84) Failed to decrypt Blocktype mismatch: 0
2013-01-28 13:13:52,761 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-85) Failed to decrypt Data must start with zero
2013-01-28 13:14:00,964 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-23) Failed to decrypt Data must not be longer than
256 bytes
2013-01-28 13:14:00,964 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-20) Failed to decrypt Data must not be longer than
256 bytes
2013-01-28 13:14:02,983 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-29) Failed to decrypt Data must not be longer than
256 bytes
2013-01-28 13:14:02,983 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
(QuartzScheduler_Worker-34) Failed to decrypt Data must not be longer than
256 bytes
- DHC
12 years, 3 months
[Users] oVirt 3.2 Release delayed
by Mike Burns
The oVirt 3.2 Release has been delayed due to delays getting a stable
oVirt Node available.
New Dates
* General availability: 2013-02-06
* Beta release: 2013-01-24
* Test Day: 2013-01-29
Thanks
Mike
12 years, 3 months
[Users] Cannot run VM. Low disk space on relevant Storage Domain
by Ricky Schneberger
Hi,
I have put up a test environment with oVirt 3.1 and found out (because
of limited storage) that when trying to run a VM I got "Error while
executing action: Cannot run VM. Low disk space on relevant Storage Domain."
Disk space on the storage domain is 322GB free (9% of total).
Can I change these limits? How are they calculated? By percentage?
I don't use quotas anywhere.
Regards
//Ricky
12 years, 3 months
[Users] 3.1 to 3.2 migration
by Alexandru Vladulescu
Hi everybody,
This might seem a stupid question, but I'll give it a shot
and ask whether anybody has tried so far to migrate a 3.1 stable to a 3.2
alpha release. On my side I have had no luck.
I might have found a bug as well, but that is what I need you to confirm
for me. I had the jboss setup running on port 8080 for http and 8443 for https.
After the upgrade, everything I try besides ports 80 and 443 doesn't
work. If I try to reconfigure the previously used ports, I find java
listening on port 8080 for http, but when I try to log in and switch to
https on the admin portal there is nothing listening there and I get
"Page cannot be displayed".
If migration is not an option, would it be sufficient to load the
database dump from 3.1 into the current 3.2 alpha release?
Alex.
12 years, 3 months
[Users] default mutipath.conf config for fedora 18 invalid
by Gianluca Cecchi
Hello,
configuring All-In-One on Fedora 18 puts these lines in multipath.conf
(at least with the ovirt-nightly repo for f18 from some days ago)
# RHEV REVISION 0.9
...
defaults {
polling_interval 5
getuid_callout "/lib/udev/scsi_id --whitelisted
--device=/dev/%n"
...
device {
vendor "HITACHI"
product "DF.*"
getuid_callout "/lib/udev/scsi_id --whitelisted
--device=/dev/%n"
...
Actually Fedora 18 has device-mapper-multipath 0.49 without getuid_callout;
from changelog:
multipath no longer uses the getuid callout. It now gets the
wwid from the udev database or the environment variables
so the two getuid_callout lines have to be removed for f18
multipath -l gives
Jan 16 00:30:15 | multipath.conf +5, invalid keyword: getuid_callout
Jan 16 00:30:15 | multipath.conf +18, invalid keyword: getuid_callout
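With those two getuid_callout lines dropped, the affected stanzas would simply read (a sketch of the relevant parts only, not the full generated file):
defaults {
    polling_interval        5
    ...
}
device {
    vendor                  "HITACHI"
    product                 "DF.*"
    ...
}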
I think it has to be considered.
Gianluca
12 years, 3 months
[Users] all-in-one failed to attach storage with latest f18 nightly
by Gianluca Cecchi
Hello,
just setup from latest nightly
3.2.0-1.20130125.git032a91f.fc18
installation seems ok but then I get these messages
in host events
2013-Jan-27, 14:10
Failed to connect Host local_host to Storage Pool local_datacenter
2013-Jan-27, 14:10
Host local_host cannot access one of the Storage Domains attached to
the Data Center local_datacenter. Setting Host state to
Non-Operational.
2013-Jan-27, 14:10
Failed to Reconstruct Master Domain for Data Center local_datacenter.
2013-Jan-27, 14:10
Detected new Host local_host. Host state was set to Up.
2013-Jan-27, 14:10
Host local_host was autorecovered.
In Action items:
Host failed to attach one of the Storage Domains attached to it.
The first errors in vdsm.log after boot are these ones:
Thread-20::DEBUG::2013-01-27
14:10:05,063::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.ac13ff4f-37fe-437e-876f-c2aa2c09a9c
8' is free, finding out if anyone is waiting for it.
Thread-20::DEBUG::2013-01-27
14:10:05,063::resourceManager::648::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.ac13ff4f-37fe-437e-876f-c2aa2c09a9c8', Clearing records.
Thread-20::ERROR::2013-01-27 14:10:05,063::task::833::TaskManager.Task::(_setError) Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 933, in connectStoragePool
    masterVersion, options)
  File "/usr/share/vdsm/storage/hsm.py", line 980, in _connectStoragePool
    res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 688, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1221, in __rebuild
    masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1573, in getMasterDomain
    raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 'spUUID=ac13ff4f-37fe-437e-876f-c2aa2c09a9c8, msdUUID=6e1b254f-2edb-48c9-811a-f1082e30c5a4'
Thread-20::DEBUG::2013-01-27
14:10:05,066::task::852::TaskManager.Task::(_run)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::Task._run:
d90f779b-e815-4d9a-af9e-ac4a997a2f06
('ac13ff4f-37fe-437e-876f-c2aa2c09a9c8', 1,
'ac13ff4f-37fe-437e-876f-c2aa2c09a9c8',
'6e1b254f-2edb-48c9-811a-f1082e30c5a4', 1) {} failed - stopping task
Thread-20::DEBUG::2013-01-27
14:10:05,066::task::1177::TaskManager.Task::(stop)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::stopping in state
preparing (force False)
Thread-20::DEBUG::2013-01-27
14:10:05,066::task::957::TaskManager.Task::(_decref)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::ref 1 aborting True
Thread-20::INFO::2013-01-27
14:10:05,066::task::1134::TaskManager.Task::(prepare)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::aborting: Task is
aborted: 'Cannot find master domain' - code 304
Thread-20::DEBUG::2013-01-27
14:10:05,066::task::1139::TaskManager.Task::(prepare)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::Prepare: aborted: Cannot
find master domain
Thread-20::DEBUG::2013-01-27
14:10:05,066::task::957::TaskManager.Task::(_decref)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::ref 0 aborting True
Thread-20::DEBUG::2013-01-27
14:10:05,067::task::892::TaskManager.Task::(_doAbort)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::Task._doAbort: force
False
Thread-20::DEBUG::2013-01-27
14:10:05,067::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-20::DEBUG::2013-01-27
14:10:05,067::task::568::TaskManager.Task::(_updateState)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::moving from state
preparing -> state aborting
Thread-20::DEBUG::2013-01-27
14:10:05,067::task::523::TaskManager.Task::(__state_aborting)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::_aborting: recover policy
none
Thread-20::DEBUG::2013-01-27
14:10:05,067::task::568::TaskManager.Task::(_updateState)
Task=`d90f779b-e815-4d9a-af9e-ac4a997a2f06`::moving from state
aborting -> state failed
Thread-20::DEBUG::2013-01-27
14:10:05,067::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-20::DEBUG::2013-01-27
14:10:05,067::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-20::ERROR::2013-01-27
14:10:05,068::dispatcher::67::Storage.Dispatcher.Protect::(run)
{'status': {'message': "Cannot find master domain:
'spUUID=ac13ff4f-37fe-437e-876f-c2aa2c09a9c8,
msdUUID=6e1b254f-2edb-48c9-811a-f1082e30c5a4'", 'code': 304}}
Thread-27::DEBUG::2013-01-27
14:11:33,529::BindingXMLRPC::926::vds::(wrapper) client
[192.168.1.101]::call getCapabilities with () {}
Thread-27::DEBUG::2013-01-27
14:11:33,564::BindingXMLRPC::933::vds::(wrapper) return
getCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:e6aa759a959'}], 'FC': []}, 'packages2':
{'kernel': {'release': '204.fc18.x86_64', 'buildtime': 1358349772.0,
'version': '3.7.2'}, 'spice-server': {'release': '1.fc18',
'buildtime': 1356035501L, 'version': '0.12.2'}, 'vdsm': {'release':
'0.119.git4caf7d4.fc18', 'buildtime': 1359107301L, 'version':
'4.10.3'}, 'qemu-kvm': {'release': '1.fc18', 'buildtime': 1355702442L,
'version': '1.2.2'}, 'libvirt': {'release': '3.fc18', 'buildtime':
1355788803L, 'version': '0.10.2.2'}, 'qemu-img': {'release': '1.fc18',
'buildtime': 1355702442L, 'version': '1.2.2'}, 'mom': {'release':
'1.fc18', 'buildtime': 1349470214L, 'version': '0.3.0'}}, 'cpuModel':
'AMD Athlon(tm) II X4 630 Processor', 'hooks': {}, 'cpuSockets': '1',
'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks':
{'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '192.168.1.101', 'cfg':
{'IPADDR': '192.168.1.101', 'GATEWAY': '192.168.1.254', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO':
'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp':
'off', 'bridged': True, 'gateway': '192.168.1.254', 'ports':
['p10p1']}}, 'bridges': {'ovirtmgmt': {'addr': '192.168.1.101', 'cfg':
{'IPADDR': '192.168.1.101', 'GATEWAY': '192.168.1.254', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO':
'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp':
'off', 'ports': ['p10p1']}}, 'uuid':
'E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1', 'lastClientIface':
'ovirtmgmt', 'nics': {'p10p1': {'addr': '', 'cfg': {'BRIDGE':
'ovirtmgmt', 'NM_CONTROLLED': 'no', 'HWADDR': '90:e6:ba:c9:f1:e1',
'STP': 'no', 'DEVICE': 'p10p1', 'ONBOOT': 'yes'}, 'mtu': '1500',
'netmask': '', 'hwaddr': '90:e6:ba:c9:f1:e1', 'speed': 100}},
'software_revision': '0.119', 'clusterLevels': ['3.0', '3.1', '3.2'],
'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:e6aa759a959',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1'],
'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}},
'software_version': '4.10', 'memSize': '5970', 'cpuSpeed': '800.000',
'version_name': 'Snow Man', 'vlans': {}, 'cpuCores': '4',
'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '',
'cpuThreads': '4', 'emulatedMachines': [u'pc-1.2', u'none', u'pc',
u'pc-1.1', u'pc-1.0', u'pc-0.15', u'pc-0.14', u'pc-0.13', u'pc-0.12',
u'pc-0.11', u'pc-0.10', u'isapc'], 'operatingSystem': {'release': '1',
'version': '18', 'name': 'Fedora'}, 'lastClient': '192.168.1.101'}}
info on host
[root@tekkaman vdsm]# vdsClient -s 0 getVdsCaps
HBAInventory = {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:e6aa759a959'}], 'FC': []}
ISCSIInitiatorName = iqn.1994-05.com.redhat:e6aa759a959
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0':
{'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
'hwaddr': '00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '192.168.1.101', 'cfg':
{'IPADDR': '192.168.1.101', 'ONBOOT': 'yes', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO':
'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'GATEWAY': '192.168.1.254'}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'off', 'ports': ['p10p1']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2
cpuModel = AMD Athlon(tm) II X4 630 Processor
cpuSockets = 1
cpuSpeed = 800.000
cpuThreads = 4
emulatedMachines = ['pc-1.2', 'none', 'pc', 'pc-1.1', 'pc-1.0',
'pc-0.15', 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10',
'isapc']
guestOverhead = 65
hooks = {}
kvmEnabled = true
lastClient = 192.168.1.101
lastClientIface = ovirtmgmt
management_ip =
memSize = 5970
netConfigDirty = False
networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.1.101', 'cfg': {'IPADDR': '192.168.1.101', 'ONBOOT': 'yes',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0',
'BOOTPROTO': 'none', 'STP': 'no', 'DEVICE': 'ovirtmgmt', 'TYPE':
'Bridge', 'GATEWAY': '192.168.1.254'}, 'mtu': '1500', 'netmask':
'255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway':
'192.168.1.254', 'ports': ['p10p1']}}
nics = {'p10p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt',
'NM_CONTROLLED': 'no', 'DEVICE': 'p10p1', 'STP': 'no', 'HWADDR':
'90:e6:ba:c9:f1:e1', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '',
'hwaddr': '90:e6:ba:c9:f1:e1', 'speed': 100}}
operatingSystem = {'release': '1', 'version': '18', 'name': 'Fedora'}
packages2 = {'kernel': {'release': '204.fc18.x86_64', 'buildtime':
1358349772.0, 'version': '3.7.2'}, 'spice-server': {'release':
'1.fc18', 'buildtime': 1356035501, 'version': '0.12.2'}, 'vdsm':
{'release': '0.119.git4caf7d4.fc18', 'buildtime': 1359107301,
'version': '4.10.3'}, 'qemu-kvm': {'release': '1.fc18', 'buildtime':
1355702442, 'version': '1.2.2'}, 'libvirt': {'release': '3.fc18',
'buildtime': 1355788803, 'version': '0.10.2.2'}, 'qemu-img':
{'release': '1.fc18', 'buildtime': 1355702442, 'version': '1.2.2'},
'mom': {'release': '1.fc18', 'buildtime': 1349470214, 'version':
'0.3.0'}}
reservedMem = 321
software_revision = 0.119
software_version = 4.10
supportedENGINEs = ['3.0', '3.1']
supportedProtocols = ['2.2', '2.3']
uuid = E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1
version_name = Snow Man
vlans = {}
vmTypes = ['kvm']
Thanks,
Gianluca
12 years, 3 months
[Users] Run once for windows xp vm very slow: correct?
by Gianluca Cecchi
Hello,
I have a windows XP vm on f18 oVirt all-in-one and rpm from nightly
repo 3.2.0-1.20130123.git2ad65d0.
disk and nic are VirtIO.
When I run it normally (spice) I almost immediately get the icon to
open spice connection and the status of VM becomes Powering Up.
And in spice window I can see the boot process, that completes in less
than 2 minutes
When I select Run once it remains for about 10 minutes in executing
phase: see this image for timings comparison:
https://docs.google.com/file/d/0BwoPbcrMv8mvb3FIeHExVHFibms/edit
and in the VM's row the status appears as down, so I don't get the
icon to connect to the console.
Only when it completes after 10 minutes do I get the console link, and I find
the VM already at its final desktop prompt.
Is this expected or should I send anything to debug/investigate?
12 years, 4 months
[Users] Local storage domain fails to attach after host reboot
by Patrick Hurrelmann
Hi list,
after rebooting one host (single host dc with local storage) the local
storage domain can't be attached again. The host was set to maintenance
mode and all running vms were shutdown prior the reboot.
Vdsm keeps logging the following errors:
Thread-1266::ERROR::2013-01-24 17:51:46,042::task::853::TaskManager.Task::(_setError) Task=`a0c11f61-8bcf-4f76-9923-43e8b9cc1424`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 817, in connectStoragePool
    return self._connectStoragePool(spUUID, hostID, scsiKey, msdUUID, masterVersion, options)
  File "/usr/share/vdsm/storage/hsm.py", line 859, in _connectStoragePool
    res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 641, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1109, in __rebuild
    self.masterDomain = self.getMasterDomain(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1448, in getMasterDomain
    raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 'spUUID=c9b86219-0d51-44c3-a7de-e0fe07e2c9e6, msdUUID=00ed91f3-43be-41be-8c05-f3786588a1ad'
and
Thread-1268::ERROR::2013-01-24 17:51:49,073::task::853::TaskManager.Task::(_setError) Task=`95b7f58b-afe0-47bd-9ebd-21d3224f5165`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 528, in getSpmStatus
    pool = self.getPool(spUUID)
  File "/usr/share/vdsm/storage/hsm.py", line 265, in getPool
    raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: ('c9b86219-0d51-44c3-a7de-e0fe07e2c9e6',)
while engine logs:
2013-01-24 17:51:46,050 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-43) [49026692] Command
org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand
return value
Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 304
mMessage Cannot find master domain:
'spUUID=c9b86219-0d51-44c3-a7de-e0fe07e2c9e6,
msdUUID=00ed91f3-43be-41be-8c05-f3786588a1ad'
Vdsm and engine logs are also attached. I set the affected host back to
maintenance. How can I recover from this and attach the storage domain
again? If more information is needed, please do not hesitate to request it.
This is on CentOS 6.3 using Dreyou's rpms. Installed versions on host:
vdsm.x86_64 4.10.0-0.44.14.el6
vdsm-cli.noarch 4.10.0-0.44.14.el6
vdsm-python.x86_64 4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch 4.10.0-0.44.14.el6
Engine:
ovirt-engine.noarch 3.1.0-3.19.el6
ovirt-engine-backend.noarch 3.1.0-3.19.el6
ovirt-engine-cli.noarch 3.1.0.7-1.el6
ovirt-engine-config.noarch 3.1.0-3.19.el6
ovirt-engine-dbscripts.noarch 3.1.0-3.19.el6
ovirt-engine-genericapi.noarch 3.1.0-3.19.el6
ovirt-engine-jbossas711.x86_64 1-0
ovirt-engine-notification-service.noarch 3.1.0-3.19.el6
ovirt-engine-restapi.noarch 3.1.0-3.19.el6
ovirt-engine-sdk.noarch 3.1.0.5-1.el6
ovirt-engine-setup.noarch 3.1.0-3.19.el6
ovirt-engine-tools-common.noarch 3.1.0-3.19.el6
ovirt-engine-userportal.noarch 3.1.0-3.19.el6
ovirt-engine-webadmin-portal.noarch 3.1.0-3.19.el6
ovirt-image-uploader.noarch 3.1.0-16.el6
ovirt-iso-uploader.noarch 3.1.0-16.el6
ovirt-log-collector.noarch 3.1.0-16.el6
Thanks and regards
Patrick
--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
12 years, 4 months
[Users] best disk type for Win XP guests
by Gianluca Cecchi
Hello,
I have a Win XP guest configured with one IDE disk.
I would like to switch to VirtIO. Is it supported/usable for Win XP as a
disk type on oVirt?
What are other people using in this case, apart from IDE?
My attempt is to add a second 1GB disk configured as VirtIO and then,
if successful, change the disk type for the first disk too.
But when powering up the guest it finds new hardware for the second
disk; I point it to the WXP\X86 directory, using virtio-win-1.1.16.vfd.
It finds the viostor.xxx files but at the end it fails installing the driver
(see
https://docs.google.com/file/d/0BwoPbcrMv8mvMUQ2SWxYZWhSV0E/edit
)
Any help/suggestion is welcome.
Gianluca
12 years, 4 months
[Users] BFA FC driver not stable on Fedora
by Kevin Maziere Aubry
Hi
I've spent hours trying to make my Brocade FC card work on Fedora 17 or the oVirt
Node build.
In fact the card is randomly seen by the system, which is really painful.
So I've downloaded, compiled and installed the latest driver from Brocade,
and now when I load the module the card is seen.
So I've installed :
bfa_util_linux_noioctl-3.2.0.0-0.noarch
bfa_driver_linux-3.2.0.0-0.noarch
And the module info are :
# modinfo bfa
filename: /lib/modules/3.3.4-5.fc17.x86_64/kernel/drivers/scsi/bfa.ko
version: 3.2.0.0
author: Brocade Communications Systems, Inc.
description: Brocade Fibre Channel HBA Driver fcpim ipfc
license: GPL
srcversion: 5C0FBDF3571ABCA9632B9CA
alias: pci:v00001657d00000022sv*sd*bc0Csc04i00*
alias: pci:v00001657d00000021sv*sd*bc0Csc04i00*
alias: pci:v00001657d00000014sv*sd*bc0Csc04i00*
alias: pci:v00001657d00000017sv*sd*bc*sc*i*
alias: pci:v00001657d00000013sv*sd*bc*sc*i*
depends: scsi_transport_fc
vermagic: 3.3.4-5.fc17.x86_64 SMP mod_unload
parm: os_name:OS name of the hba host machine (charp)
parm: os_patch:OS patch level of the hba host machine (charp)
parm: host_name:Hostname of the hba host machine (charp)
parm: num_rports:Max number of rports supported per port
(physical/logical), default=1024 (int)
parm: num_ioims:Max number of ioim requests, default=2000 (int)
parm: num_tios:Max number of fwtio requests, default=0 (int)
parm: num_tms:Max number of task im requests, default=128 (int)
parm: num_fcxps:Max number of fcxp requests, default=64 (int)
parm: num_ufbufs:Max number of unsolicited frame buffers,
default=64 (int)
parm: reqq_size:Max number of request queue elements, default=256
(int)
parm: rspq_size:Max number of response queue elements, default=64
(int)
parm: num_sgpgs:Number of scatter/gather pages, default=2048 (int)
parm: rport_del_timeout:Rport delete timeout, default=90 secs,
Range[>0] (int)
parm: bfa_lun_queue_depth:Lun queue depth, default=32, Range[>0]
(int)
parm: bfa_io_max_sge:Max io scatter/gather elements , default=255
(int)
parm: log_level:Driver log level, default=3,
Range[Critical:1|Error:2|Warning:3|Info:4] (int)
parm: ioc_auto_recover:IOC auto recovery, default=1,
Range[off:0|on:1] (int)
parm: linkup_delay:Link up delay, default=30 secs for boot port.
Otherwise 10 secs in RHEL4 & 0 for [RHEL5, SLES10, ESX40] Range[>0] (int)
parm: msix_disable_cb:Disable Message Signaled Interrupts for
Brocade-415/425/815/825 cards, default=0, Range[false:0|true:1] (int)
parm: msix_disable_ct:Disable Message Signaled Interrupts if
possible for Brocade-1010/1020/804/1007/1741 cards, default=0,
Range[false:0|true:1] (int)
parm: fdmi_enable:Enables fdmi registration, default=1,
Range[false:0|true:1] (int)
parm: pcie_max_read_reqsz:PCIe max read request size, default=0
(use system setting), Range[128|256|512|1024|2048|4096] (int)
parm: max_xfer_size:default=32MB,
Range[64k|128k|256k|512k|1024k|2048k] (int)
parm: max_rport_logins:Max number of logins to initiator and
target rports on a port (physical/logical), default=1024 (int)
I guess it could be possible to update the driver inside the oVirt
Node build?
Kevin
--
Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
http://www.alterway.fr
12 years, 4 months
[Users] How to update VdcBootStrapUrl (not using DB) ?
by Adrian Gibanel
I've recently updated my http://www.ovirt.org/User:Adrian15/oVirt_engine_migration oVirt engine migration howto with the http://www.ovirt.org/User:Adrian15/oVirt_engine_migration#Update_VdcBootS... Update VdcBootStrapUrl section.
My next move is to move this section into the http://www.ovirt.org/How_to_change_engine_host_name How to change engine host name because I think it's a needed step.
But I don't like that currently you have to issue a database update like this:
psql -c "update vdc_options set option_value = 'http://new.manager.com:80/Components/vds/' where option_name = 'VdcBootStrapUrl'" -U postgres engine
So I was wondering if there was a proper way, like using a command such as vdsClient or something similar. I mean, so that if in the future the vdc_options table gets renamed, the command would still be the same.
I CC jhernand because I think he wrote the original "How to change engine host name" on the mailing list and also answered with the VdcBootStrapUrl update statement to someone who couldn't add a new host after, I think, restoring an ovirt-engine.
Thank you.
--
--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
12 years, 4 months
Re: [Users] Best practice to resize a WM disk image
by Karli Sjöberg
--_000_5F9E965F5A80BC468BE5F40576769F091023B2DCexchange21_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
SGV5LA0KDQpJIHdhbnRlZCB0byByZXBvcnQgdGhhdCB0cnlpbmcgdG8gImRkIiBmcm9tIHRoZSBz
dG9yYWdlLXNpZGUgYWx3YXlzIG1ha2VzIHRoZSBWTcK0cyBPUyBzZWUgdHdvIGl0ZGVudGljYWxs
eSBzbWFsbCBIREQncy4gVGhlIG9ubHkgd29yay1hcm91bmQgScK0dmUgZm91bmQgdGhhdCB3b3Jr
cyBpcyB0byBjcmVhdGUgYSBuZXcsIGJpZ2dlciBkcml2ZSwgYm9vdCB0aGUgVk0gZnJvbSBhIGxp
dmUtQ0QgYW5kICJkZCIgZnJvbSB0aGVyZS4gV2hlbiByZWJvb3RlZCBhZnRlciBjb21wbGV0aW9u
LCB0aGUgVk3CtHMgT1MgdGhlbiBzZWVzIGEgYmlnZ2VyIGRyaXZlIHRoYXQgeW91IGNhbiBleHRl
bmQgeW91ciBmaWxlc3lzdGVtIG9uLiBBIGxpdHRsZSBzbG93ZXIgcHJvY2VkdXJlLCBoYXZpbmcg
dGhlIG1pcnJvcmluZyBnbyBvdmVyIHRoZSBuZXR3b3JrLCBidXQgd29ya3MsIGFuZCB0aGF0wrRz
IHdoYXTCtHMgaW1wb3J0YW50IGluIHRoZSBlbmQ6KQ0KDQovS2FybGkNCg0KbcOlbiAyMDEzLTAx
LTE0IGtsb2NrYW4gMDg6MzcgKzAwMDAgc2tyZXYgS2FybGkgU2rDtmJlcmc6DQpvbnMgMjAxMy0w
MS0wOSBrbG9ja2FuIDEzOjA0IC0wNTAwIHNrcmV2IFllZWxhIEthcGxhbjoNCg0KDQoNCi0tLS0t
IE9yaWdpbmFsIE1lc3NhZ2UgLS0tLS0NCj4gRnJvbTogIkthcmxpIFNqw7ZiZXJnIiA8S2FybGku
U2pvYmVyZ0BzbHUuc2U8bWFpbHRvOkthcmxpLlNqb2JlcmdAc2x1LnNlPj4NCj4gVG86ICJZZWVs
YSBLYXBsYW4iIDx5a2FwbGFuQHJlZGhhdC5jb208bWFpbHRvOnlrYXBsYW5AcmVkaGF0LmNvbT4+
DQo+IENjOiAiUm9ja3kiIDxyb2NreWJhbG9vQGdtYWlsLmNvbTxtYWlsdG86cm9ja3liYWxvb0Bn
bWFpbC5jb20+PiwgVXNlcnNAb3ZpcnQub3JnPG1haWx0bzpVc2Vyc0BvdmlydC5vcmc+DQo+IFNl
bnQ6IFdlZG5lc2RheSwgSmFudWFyeSA5LCAyMDEzIDQ6MzA6MzUgUE0NCj4gU3ViamVjdDogUmU6
IFtVc2Vyc10gQmVzdCBwcmFjdGljZSB0byByZXNpemUgYSBXTSBkaXNrIGltYWdlDQo+DQo+IG9u
cyAyMDEzLTAxLTA5IGtsb2NrYW4gMDk6MTMgLTA1MDAgc2tyZXYgWWVlbGEgS2FwbGFuOg0KPg0K
PiAtLS0tLSBPcmlnaW5hbCBNZXNzYWdlIC0tLS0tDQo+ID4gRnJvbTogIkthcmxpIFNqw7ZiZXJn
IiA8IEthcmxpLlNqb2JlcmdAc2x1LnNlPG1haWx0bzpLYXJsaS5Tam9iZXJnQHNsdS5zZT4gPg0K
PiA+IFRvOiAiWWVlbGEgS2FwbGFuIiA8IHlrYXBsYW5AcmVkaGF0LmNvbTxtYWlsdG86eWthcGxh
bkByZWRoYXQuY29tPiA+DQo+ID4gQ2M6ICJSb2NreSIgPCByb2NreWJhbG9vQGdtYWlsLmNvbTxt
YWlsdG86cm9ja3liYWxvb0BnbWFpbC5jb20+ID4sIFVzZXJzQG92aXJ0Lm9yZzxtYWlsdG86VXNl
cnNAb3ZpcnQub3JnPiA+IFNlbnQ6DQo+ID4gV2VkbmVzZGF5LCBKYW51YXJ5IDksIDIwMTMgMTo1
NjozMiBQTQ0KPiA+IFN1YmplY3Q6IFJlOiBbVXNlcnNdIEJlc3QgcHJhY3RpY2UgdG8gcmVzaXpl
IGEgV00gZGlzayBpbWFnZQ0KPiA+DQo+ID4gdGlzIDIwMTMtMDEtMDgga2xvY2thbiAxMTowMyAt
MDUwMCBza3JldiBZZWVsYSBLYXBsYW46DQo+ID4NCj4gPiBTbywgZmlyc3Qgb2YgYWxsLCB5b3Ug
c2hvdWxkIGtub3cgdGhhdCByZXNpemluZyBhIGRpc2sgaXMgbm90IHlldA0KPiA+IHN1cHBvcnRl
ZCBpbiBvVmlydC4NCj4gPiBJZiB5b3UgZGVjaWRlIHRoYXQgeW91IG11c3QgdXNlIGl0IGFueXdh
eSwgeW91IHNob3VsZCBrbm93IGluDQo+ID4gYWR2YW5jZQ0KPiA+IHRoYXQgaXQncyBub3QgcmVj
b21tZW5kZWQsDQo+ID4gYW5kIHRoYXQgeW91ciBkYXRhIGlzIGF0IHJpc2sgd2hlbiB5b3UgcGVy
Zm9ybSB0aGVzZSBraW5kIG9mDQo+ID4gYWN0aW9ucy4NCj4gPg0KPiA+IFRoZXJlIGFyZSBzZXZl
cmFsIHdheXMgdG8gcGVyZm9ybSB0aGlzLg0KPiA+IE9uZSBvZiB0aGVtIGlzIHRvIGNyZWF0ZSBh
IHNlY29uZCAobGFyZ2VyKSBkaXNrIGZvciB0aGUgdm0sDQo+ID4gcnVuIHRoZSB2bSBmcm9tIGxp
dmUgY2QgYW5kIHVzZSBkZCB0byBjb3B5IHRoZSBmaXJzdCBkaXNrIGNvbnRlbnRzDQo+ID4gaW50
byB0aGUgc2Vjb25kIG9uZSwNCj4gPiBhbmQgZmluYWxseSByZW1vdmUgdGhlIGZpcnN0IGRpc2sg
YW5kIG1ha2Ugc3VyZSB0aGF0IHRoZSBuZXcgZGlzaw0KPiA+IGlzDQo+ID4gY29uZmlndXJlZCBh
cyB5b3VyIHN5c3RlbSBkaXNrLg0KPiA+IEhlcmUgeW91IGd1aWRlIGZvciB0aGUgZGQgb3BlcmF0
aW9uDQo+ID4gdG8gYmUgZG9uZSBmcm9tIHdpdGhpbiB0aGUgZ3Vlc3Qgc3lzdGVtLCBidXQgYm9v
dGVkIGZyb20gbGl2ZS4NCj4gPiBDYW4gdGhpcyBiZSBkb25lIGRpcmVjdGx5IGZyb20gdGhlIE5G
UyBzdG9yYWdlIGl0c2VsZiBpbnN0ZWFkPw0KPiA+DQo+DQo+IEthcmxpLCBpdCBjYW4gYmUgZG9u
ZSBieSB1c2luZyBkZCAob3IgcnN5bmMpLCB3aGVuIHlvdXIgc291cmNlIGlzIHRoZQ0KPiB2b2x1
bWUgb2YgdGhlIGN1cnJlbnQgZGlzayBpbWFnZQ0KPiBhbmQgdGhlIGRlc3RpbmF0aW9uIGlzIHRo
ZSB2b2x1bWUgb2YgdGhlIG5ldyBkaXNrIGltYWdlIGNyZWF0ZWQuDQo+IFlvdSBqdXN0IGhhdmUg
dG8gZmluZCB0aGUgaW1hZ2VzIGluIHRoZSBpbnRlcm5hbHMgb2YgdGhlIHZkc20gaG9zdCwNCj4g
d2hpY2ggaXMgYSBiaXQgbW9yZSB0cmlja3kNCj4gYW5kIGNhbiBjYXVzZSBtb3JlIGRhbWFnZSBp
ZiBkb25lIHdyb25nLiBZb3UgbWVhbiBzaW5jZSB0aGUgVk0ncyBhbmQNCj4gZGlza3MgYXJlIGNh
bGxlZCBsaWtlICJjM2RiZmI1Zi03YjNiLTQ2MDItOTYxZi02MjRjNjk2MTg3MzQiIHlvdQ0KPiBo
YXZlIHRvIHF1ZXJ5IHRoZSBhcGkgdG8gZmlndXJlIG91dCB3aGF0wrRzIHdoYXQsIGJ1dCBvdGhl
> that, you're saying it'll "just work", so that's good to know, since
> I think letting the storage itself do the dd copy locally is going
> to be much much faster than through the VM, over the network.
> Thanks!
> Will it matter if the disks are "Thin Provision" or "Preallocated"?
>
>

As long as it's done on the base volume it doesn't matter.


Well, I've now tested the suggested procedure and didn't really go all the way home.
1. Created a new, bigger virtual disk than the original, 40GB.
2. Booted Win2008R2 guest and could see from DiskManager that a new, bigger drive, 80GB, had appeared.
3. Shut guest down and issued a dd from old source to new, bigger destination.
4. When started, DiskManager now sees an offline, equally small drive as the original, 40GB. There is no free space in the new drive to expand with, Windows only sees it as beeing 40GB.

Have tried "Refresh" and "Rescan", but Windows just sees two identically small disks.

Suggestions?

>
> >
> >
> > The second, riskier, option is to export the vm to an export
> > domain,
> > resize the image volume size to the new larger size using qemu-img
> > and also modify the vm's metadata in its ovf,
> > as you can see this option is more complicated and requires deeper
> > understanding and altering of the metadata...
> > finally you'll need to import the vm back.
> >
> >
> >
> > ----- Original Message -----
> > > From: "Rocky" <rockybaloo@gmail.com>
> > > To: "Yeela Kaplan" <ykaplan@redhat.com>
> > > Cc: Users@ovirt.org
> > > Sent: Tuesday, January 8, 2013 11:30:00 AM
> > > Subject: Re: [Users] Best practice to resize a WM disk image
> > >
> > > Its just a theoretical question as I think the issue will come
> > > for
> > > us
> > > and other users.
> > >
> > > I think there can be one or more snapshots in the WM over the
> > > time.
> > > But
> > > if that is an issue we can always collapse them I think.
> > > If its a base image it should be RAW, right?
> > > In this case its on file storage (NFS).
> > >
> > > Regards //Ricky
> > >
> > > On 2013-01-08 10:07, Yeela Kaplan wrote:
> > > > Hi Ricky,
> > > > In order to give you a detailed answer I need additional
> > > > details
> > > > regarding the disk:
> > > > - Is the disk image composed as a chain of volumes or just a
> > > > base
> > > > volume?
> > > > (if it's a chain it will be more complicated, you might want to
> > > > collapse the chain first to make it easier).
> > > > - Is the disk image raw? (you can use qemu-img info to check)
> > > > - Is the disk image on block or file storage?
> > > >
> > > > Regards,
> > > > Yeela
> > > >
> > > > ----- Original Message -----
> > > >> From: "Ricky" <rockybaloo@gmail.com>
> > > >> To: Users@ovirt.org
> > > >> Sent: Tuesday, January 8, 2013 10:40:27 AM
> > > >> Subject: [Users] Best practice to resize a WM disk image
> > > >>
> > > >> Hi,
> > > >>
> > > >> If I have a VM that has run out of disk space, how can I
> > > >> increase
> > > >> the
> > > >> space in best way? One way is to add a second bigger disk to
> > > >> the
> > > >> WM
> > > >> and then use dd or similar to copy. But is it possible to
> > > >> stretch
> > > >> the
> > > >> original disk inside or outside oVirt and get oVirt to know
> > > >> the
> > > >> bigger
> > > >> size?
> > > >>
> > > >> Regards //Ricky
> > > >> _______________________________________________
> > > >> Users mailing list
> > > >> Users@ovirt.org
> > > >> http://lists.ovirt.org/mailman/listinfo/users
> > >
> > >
> > _______________________________________________
> > Users mailing list Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
Hey,

I wanted to report that trying to "dd" from the storage-side always makes the VM's OS see two identically small HDD's. The only work-around I've found that works is to create a new, bigger drive, boot the VM from a live-CD and "dd" from there. When rebooted after completion, the VM's OS then sees a bigger drive that you can extend your filesystem on. A little slower procedure, having the mirroring go over the network, but works, and that's what's important in the end:)

/Karli
12 years, 4 months
[Users] DL380 G5 - Fails to Activate
by Tom Brown
Hi
I have a couple of old DL380 G5's and I am putting them into their own cluster for testing various things out.
The install of 3.1 from dreyou goes fine onto them, but when they try to activate I get the following:
Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : model_Conroe, nx
KVM appears to run just fine on these hosts, and their CPUs are
Intel(R) Xeon(R) CPU 5140 @ 2.33GHz
Is it possible to add these into a 3.1 cluster?
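For what it's worth, a missing "nx" feature on this generation of ProLiant is usually the BIOS "No-Execute Memory Protection" option being disabled rather than a real CPU limitation; the Xeon 5140 itself is a Conroe-class part. A quick check on the host (assuming a stock EL6 install) might be:

grep -o -w nx /proc/cpuinfo | sort -u    (empty output means the OS does not see the NX bit)
virsh capabilities | grep -A 3 '<cpu>'   (shows the CPU model libvirt detected)

If NX shows up after toggling it in the BIOS, re-activating the host should let it match the Conroe cluster CPU level.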
thanks
12 years, 4 months
[Users] ovirt node
by David Michael
hi
I cannot add the ovirt node to the ovirt engine, and I got this log message:
[org.ovirt.engine.core.bll.AddVdsCommand] (http-0.0.0.0-8080-3)
CanDoAction of action AddVds failed.
Reasons: VDS_CANNOT_CONNECT_TO_SERVER,VAR__ACTION__ADD,VAR__TYPE__HOST
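VDS_CANNOT_CONNECT_TO_SERVER generally just means the engine could not reach the host on the address given in the Add Host dialog, before any deployment starts. A few basic checks from the engine machine (<node-address> is a placeholder):

ping <node-address>
ssh root@<node-address>     (adding a Fedora/EL host is done over SSH, so this has to work)

For a host installed from the oVirt Node ISO, also make sure a root/admin password was set in the node TUI and that SSH access (or registration to the engine from the node side) is enabled there.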
12 years, 4 months
Re: [Users] Change locale for VNC console
by Itamar Heim
On 01/03/2013 01:38 PM, Frank Wall wrote:
> On Thu, Jan 03, 2013 at 11:20:34AM +0000, Alexandre Santos wrote:
>> Did you shutdown the VM and started it again or just restart the VNC connection?
>
> It was a complete shutdown, have tried this multiple times.
Actually, changing a config value requires an engine restart.
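For reference, the keyboard layout used for VNC consoles is an engine-config option; a sketch, assuming the key is called VncKeyboardLayout on this version:

engine-config -g VncKeyboardLayout          (show the current value)
engine-config -s VncKeyboardLayout=de       (or whichever layout is wanted)
service ovirt-engine restart

Consoles opened after the restart should pick up the new layout; already-open consoles need to be closed and reopened.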
12 years, 4 months
[Users] Attaching export domain to dc fails
by Patrick Hurrelmann
Hi list,
in one datacenter I'm facing problems with my export storage. The dc is
of type single host with local storage. On the host I see that the nfs
export domain is still connected, but the engine does not show this and
therefore it cannot be used for exports or detached.
Trying to attach the export domain again fails. The following is
logged in vdsm:
Thread-1902159::ERROR::2013-01-24
17:11:45,474::task::853::TaskManager.Task::(_setError)
Task=`4bc15024-7917-4599-988f-2784ce43fbe7`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 861, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 960, in attachStorageDomain
pool.attachSD(sdUUID)
File "/usr/share/vdsm/storage/securable.py", line 63, in wrapper
return f(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
dom.attach(self.spUUID)
File "/usr/share/vdsm/storage/sd.py", line 442, in attach
raise se.StorageDomainAlreadyAttached(pools[0], self.sdUUID)
StorageDomainAlreadyAttached: Storage domain already attached to pool:
'domain=cd23808b-136a-4b33-a80c-f2581eab022d,
pool=d95c53ca-9cef-4db2-8858-bf4937bd8c14'
It won't let me attach the export domain saying that it is already
attached. Manually umounting the export domain on the host results in
the same error on subsequent attach.
This is on CentOS 6.3 using Dreyou's rpms. Installed versions on host:
vdsm.x86_64 4.10.0-0.44.14.el6
vdsm-cli.noarch 4.10.0-0.44.14.el6
vdsm-python.x86_64 4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch 4.10.0-0.44.14.el6
Engine:
ovirt-engine.noarch 3.1.0-3.19.el6
ovirt-engine-backend.noarch 3.1.0-3.19.el6
ovirt-engine-cli.noarch 3.1.0.7-1.el6
ovirt-engine-config.noarch 3.1.0-3.19.el6
ovirt-engine-dbscripts.noarch 3.1.0-3.19.el6
ovirt-engine-genericapi.noarch 3.1.0-3.19.el6
ovirt-engine-jbossas711.x86_64 1-0
ovirt-engine-notification-service.noarch 3.1.0-3.19.el6
ovirt-engine-restapi.noarch 3.1.0-3.19.el6
ovirt-engine-sdk.noarch 3.1.0.5-1.el6
ovirt-engine-setup.noarch 3.1.0-3.19.el6
ovirt-engine-tools-common.noarch 3.1.0-3.19.el6
ovirt-engine-userportal.noarch 3.1.0-3.19.el6
ovirt-engine-webadmin-portal.noarch 3.1.0-3.19.el6
ovirt-image-uploader.noarch 3.1.0-16.el6
ovirt-iso-uploader.noarch 3.1.0-16.el6
ovirt-log-collector.noarch 3.1.0-16.el6
How can this be recovered to a sane state? If more information is
needed, please do not hesitate to request it.
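For reference, the StorageDomainAlreadyAttached error above usually comes from a stale pool reference kept inside the export domain's own metadata. One recovery path that has worked elsewhere, assuming the standard NFS export domain layout (the domain UUID below is the one from the log, the NFS path is a placeholder), is roughly:

mount -t nfs <nfs-server>:/<export-path> /mnt/exportdom
cd /mnt/exportdom/cd23808b-136a-4b33-a80c-f2581eab022d/dom_md
cp metadata metadata.bak
vi metadata      (blank the POOL_UUID= value and delete the _SHA_CKSUM= line)
cd /; umount /mnt/exportdom

After that the export domain can normally be attached again from the webadmin. Keep the metadata backup, since a wrong edit can make the domain unusable.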
Thanks and regards
Patrick
--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
12 years, 4 months
[Users] storage domain auto re-cover
by Alex Leonhardt
hi,
Is it possible to set a storage domain to auto-recover / auto-reactivate?
E.g. after I restart a host that runs a storage domain, I want ovirt engine
to make that storage domain active after the host has come up.
thanks
alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
12 years, 4 months
[Users] VM migration failed on oVirt Node Hypervisor release 2.5.5 (0.1.fc17) : empty cacert.pem
by Kevin Maziere Aubry
Hi
My concern is about the oVirt Node Hypervisor release 2.5.5
(0.1.fc17), ISO downloaded from ovirt.
I've installed and connected 4 nodes to a manager, and tried to migrate a VM
between hypervisors.
It always fails with this error:
libvirtError: operation failed: Failed to connect to remote libvirt URI
qemu+tls://172.16.6.3/system
where 172.16.6.3 is the IP of a node.
I've checked on the node and port 16514 is open.
I also tested the virsh command to get a better error message:
virsh -c tls://172.16.6.3/system
error: Unable to import client certificate /etc/pki/CA/cacert.pem
I've checked the cert file on the oVirt node and found it was empty, and
that on all nodes installed from the oVirt ISO it is empty.
I also checked /config/etc/pki/CA/cacert.pem, which is also empty.
On a vdsm node installed from packages on Fedora 17, it works.
ls -al /etc/pki/CA/cacert.pem
lrwxrwxrwx. 1 root root 30 18 janv. 14:30 /etc/pki/CA/cacert.pem ->
/etc/pki/vdsm/certs/cacert.pem
And the cert is good.
I've seen no bug report regarding the feature...
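A possible stop-gap, until the image itself is fixed, is to put the same CA file in place on the affected nodes that the Fedora host has, and persist it so it survives a reboot (the persist command is oVirt Node specific); a sketch:

cp /etc/pki/vdsm/certs/cacert.pem /etc/pki/CA/cacert.pem
persist /etc/pki/CA/cacert.pem
service libvirtd restart

This assumes /etc/pki/vdsm/certs/cacert.pem is populated on the node, which should be the case once the node has been approved by the engine.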
Kevin
--
Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
http://www.alterway.fr
12 years, 4 months
Re: [Users] Problems when trying to delete a snapshot
by Eduardo Warszawski
----- Original Message -----
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi,
> I recovered from this error by importing my base-image into a new machine
> and restoring the backups.
>
> But is it possible "by hand" to merge the latest snapshot into a
> base-image to get a new VM up and running with the old disk image?
>
Looking at your vdsm logs the snapshot should be intact, so it can be
manually restored to the previous state. Please restore the images dirs,
removing the "old" and the "orig" dirs you have.
You need to change the engine db accordingly too.
Later you can retry the merge.
Regards.
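For the manual merge itself, once the image directories are back in place and the VM is down, qemu-img can fold the snapshot volume into its base; a rough sketch on NFS storage, with placeholder UUID paths:

cd /rhev/data-center/<pool-uuid>/<domain-uuid>/images/<image-group-uuid>
qemu-img info <snapshot-volume>      (shows the format and the backing file, i.e. the base volume)
qemu-img commit <snapshot-volume>    (writes the snapshot's changes back into that base)

After the commit the base volume holds the merged data; as noted above, the engine database still has to be updated to reflect the shortened chain before importing or running the VM.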
> I have tried with qemu-img but have no go with it.
>
> Regards //Ricky
>
>
> On 2012-12-30 16:57, Haim Ateya wrote:
> > Hi Ricky,
> >
> > from going over your logs, it seems like create snapshot failed,
> > its logged clearly in both engine and vdsm logs [1]. did you try to
> > delete this snapshot or was it a different one? if so, not sure its
> > worth debugging.
> >
> > bee7-78e7d1cbc201, vmId=d41b4ebe-3631-4bc1-805c-d762c636ca5a), log
> > id: 46d21393 2012-12-13 10:40:24,372 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Failed in SnapshotVDS method
> > 2012-12-13 10:40:24,372 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Error code SNAPSHOT_FAILED and error
> > message VDSGenericException: VDSErrorException: Failed to
> > SnapshotVDS, error = Snapshot failed 2012-12-13 10:40:24,372 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Command
> > org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return
> > value Class Name:
> > org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
> >
> >
> mStatus Class Name:
> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
> > mCode 48 mMessage
> > Snapshot failed
> >
> >
> >
> > enter/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd/master/tasks/21cbcc25-7672-4704-a414-a44f5e9944ed
> > temp
> > /rhev/data-center/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd/maste
> >
> >
> r/tasks/21cbcc25-7672-4704-a414-a44f5e9944ed.temp
> > 21cbcc25-7672-4704-a414-a44f5e9944ed::ERROR::2012-12-14
> > 10:48:41,189::volume::492::Storage.Volume::(create) Unexpected
> > error Traceback (most recent call last): File
> > "/usr/share/vdsm/storage/volume.py", line 475, in create
> > srcVolUUID, imgPath, volPath) File
> > "/usr/share/vdsm/storage/fileVolume.py", line 138, in _create
> > oop.getProcessPool(dom.sdUUID).createSparseFile(volPath,
> > sizeBytes) File "/usr/share/vdsm/storage/remoteFileHandler.py",
> > line 277, in callCrabRPCFunction *args, **kwargs) File
> > "/usr/share/vdsm/storage/remoteFileHandler.py", line 195, in
> > callCrabRPCFunction raise err IOError: [Errno 27] File too large
> >
> > 2012-12-13 10:40:24,372 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Vds: virthost01 2012-12-13
> > 10:40:24,372 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
> > (pool-5-thread-50) [12561529] Command SnapshotVDS execution failed.
> > Exception: VDSErrorException: VDSGenericException:
> > VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed
> > 2012-12-13 10:40:24,373 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
> > (pool-5-thread-50) [12561529] FINISH, SnapshotVDSCommand, log id:
> > 46d21393 2012-12-13 10:40:24,373 ERROR
> > [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
> > (pool-5-thread-50) [12561529] Wasnt able to live snpashot due to
> > error: VdcBLLException: VdcBLLException:
> > org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> > VDSGenericException: VDSErrorException: Failed to SnapshotVDS,
> > error = Snapshot failed, rolling back. 2012-12-13 10:40:24,376
> > ERROR [org.ovirt.engine.core.bll.CreateSnapshotCommand]
> > (pool-5-thread-50) [4fd6c4e4] Ending command with failure:
> > org.ovirt.engine.core.bll.CreateSnapshotCommand 2012-12-13 1
> >
> > 21cbcc25-7672-4704-a414-a44f5e9944ed::ERROR::2012-12-14
> > 10:48:41,196::task::833::TaskManager.Task::(_setError)
> > Task=`21cbcc25-7672-4704-a414-a44f5e9944ed`::Unexpected error
> > Traceback (most recent call last): File
> > "/usr/share/vdsm/storage/task.py", line 840, in _run return
> > fn(*args, **kargs) File "/usr/share/vdsm/storage/task.py", line
> > 307, in run return self.cmd(*self.argslist, **self.argsdict) File
> > "/usr/share/vdsm/storage/securable.py", line 68, in wrapper return
> > f(self, *args, **kwargs) File "/usr/share/vdsm/storage/sp.py", line
> > 1903, in createVolume srcImgUUID=srcImgUUID,
> > srcVolUUID=srcVolUUID) File "/usr/share/vdsm/storage/fileSD.py",
> > line 258, in createVolume volUUID, desc, srcImgUUID, srcVolUUID)
> > File "/usr/share/vdsm/storage/volume.py", line 494, in create
> > (volUUID, e)) VolumeCreationError: Error creating a new volume:
> > ('Volume creation 6da02c1e-5ef5-4fab-9ab2-bb081b35e7b3 failed:
> > [Errno 27] File too large',)
> >
> >
> >
> > ----- Original Message -----
> >> From: "Ricky Schneberger" <ricky(a)schneberger.se> To: "Haim Ateya"
> >> <hateya(a)redhat.com> Cc: users(a)ovirt.org Sent: Thursday, December
> >> 20, 2012 5:52:10 PM Subject: Re: [Users] Problems when trying to
> >> delete a snapshot
> >>
> > Hi, The task did not finished but it broked my VM. What I have
> > right now is a VM with a base-image and a snapshot that I need to
> > merge together so I can import the disk in a new VM.
> >
> > I have attached the logs and even the output from the
> > tree-command.
> >
> > Regards //
> >
> > Ricky
> >
> >
> >
> > On 2012-12-16 08:35, Haim Ateya wrote:
> >>>> please attach full engine and vdsm log from SPM machine.
> >>>> also, did the task finished ? please run tree command for
> >>>> /rhev/data-center/.
> >>>>
> >>>> ----- Original Message -----
> >>>>> From: "Ricky Schneberger" <ricky(a)schneberger.se> To:
> >>>>> users(a)ovirt.org Sent: Friday, December 14, 2012 3:16:58 PM
> >>>>> Subject: [Users] Problems when trying to delete a snapshot
> >>>>>
> >>>> I was trying to delete a snapshot from one of my VM and
> >>>> everything started fine.
> >>>>
> >>>> The disk image is a thin provisioned 100GB disk with 8GB
> >>>> data. I just hade one snapshot and it was that one I started
> >>>> to delete. After more than two hours I look in the folder
> >>>> with that VMs disk images and found out that there was i new
> >>>> created file with a size of around 650GB and it was still
> >>>> growing.
> >>>>
> >>>> -rw-rw----. 1 vdsm kvm 8789950464 14 dec 12.23
> >>>> 8ede8e53-1323-442b-84f2-3c94114c64cf -rw-r--r--. 1 vdsm kvm
> >>>> 681499951104 14 dec 14.10
> >>>> 8ede8e53-1323-442b-84f2-3c94114c64cf_MERGE -rw-r--r--. 1 vdsm
> >>>> kvm 272 14 dec 12.24
> >>>> 8ede8e53-1323-442b-84f2-3c94114c64cf.meta -rw-rw----. 1 vdsm
> >>>> kvm 107382439936 6 jun 2012
> >>>> b4a43421-728b-4204-a389-607221d945b7 -rw-r--r--. 1 vdsm kvm
> >>>> 282 14 dec 12.24 b4a43421-728b-4204-a389-607221d945b7.meta
> >>>>
> >>>> Any idea what is happening?
> >>>>
> >>>> Regards
> >>>>>
> >>>>> _______________________________________________ Users
> >>>>> mailing list Users(a)ovirt.org
> >>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>>
> >>
>
> - --
> Ricky Schneberger
>
> - ------------------------------------
> "Not using free exhaust energy to help your engine breathe is
> downright
> criminal"
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.11 (GNU/Linux)
> Comment: Using GnuPG with undefined - http://www.enigmail.net/
>
> iEYEARECAAYFAlDhjYYACgkQOap81biMC2NY1gCdHeTHy92dFzMMhwKA360OSauW
> KMIAn1rClC+ZWRgukQJaeCY0g3APw4to
> =G4Bl
> -----END PGP SIGNATURE-----
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
12 years, 4 months
[Users] Installing a lab setup from scratch using F18
by Joop
As promised on IRC (jvandewege) I'll post my findings of setting up an
ovirt lab environment from scratch using F18.
First some background:
- 2 hosts for testing storage cluster with replicated gluster data and
iso domains (HP ML110 G5)
- 2 hosts for VMs (HP DL360 G5?)
- 1 management server (HP ML110 G5)
All physical servers have at least a 1Gb connection and they also have 2
10Gb ethernet ports connected to two Arista switches.
The complete setup (except for the management server) is redundant. Installs use the
F18-x64 DVD with minimal server plus extra tools; after install the
ovirt.repo and the beta gluster repo are activated.
This serves as a proof of concept for a bigger setup.
Problems sofar:
- looks like F18 uses a different path to access video since using the
defaults leads to garbled video, need to use nomodeset as a kernel option
- upgrading the minimal install (yum upgrade) gives me kernel-3.7.2-204
and the boot process halts with soft locks on different cpus; reverting
to 3.6.10-4.fc18.x86_64 fixes that. Management is using the 3.7.2 kernel
without problems, BUT it doesn't use libvirt/qemu-kvm/vdsm, so my guess is
it's related.
- need to disable NetworkManager and enable the network service (and fix the
ifcfg-xxx files) to get networking going; see the sketch after this list
- adding the storage hosts from the webui works, but after reboot vdsm is
not starting; the reason seems to be that the network isn't initialised until
after all interfaces are done with their dhcp requests. There are 4
interfaces which use dhcp, and setting those to BOOTPROTO=none seems to
help.
- during deploy there is a warning about 'cannot set tuned profile'; it
seems harmless, but I hadn't seen that one until now.
- the deployment script discovers during deployment that the ID of the
second storage server is identical to the first one and aborts the
deployment (blame HP!). Shouldn't it generate a unique one using uuidgen?
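A sketch of the network changes mentioned above, assuming F18's stock service names and an interface called em1 (names differ per box):

systemctl disable NetworkManager.service
systemctl enable network.service

and in /etc/sysconfig/network-scripts/ifcfg-em1 (one file per interface, unused ones set to BOOTPROTO=none):

DEVICE=em1
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

followed by systemctl restart network.service (or a reboot), so vdsm finds the bridges and interfaces it expects at start-up.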
Things that are OK so far:
- ovirt-engine setup (no problems with postgresql)
- creating/activating gluster volumes (no more deadlocks)
Adding virt hosts has to wait until tomorrow; I have problems getting the DVD
ISO onto a USB stick and will probably burn a DVD to keep going.
Joop
12 years, 4 months
[Users] cannot add gluster domain
by T-Sinjon
Hi, everyone:
Recently, I newly installed ovirt 3.1 from http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
and the node uses http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1...
When I add a gluster domain via NFS, a mount error occurs.
I did the mount manually on the node, and it fails without the -o nolock option:
# /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 my-gluster-ip:/gvol02/GlusterDomain /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
Below are the vdsm.log from the node and the engine.log; any help is appreciated:
vdsm.log
Thread-12717::DEBUG::2013-01-22 09:19:02,261::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
Thread-12717::DEBUG::2013-01-22 09:19:02,261::task::588::TaskManager.Task::(_updateState) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state init -> state preparing
Thread-12717::INFO::2013-01-22 09:19:02,262::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-12717::INFO::2013-01-22 09:19:02,262::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-12717::DEBUG::2013-01-22 09:19:02,262::task::1172::TaskManager.Task::(prepare) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-12717::DEBUG::2013-01-22 09:19:02,262::task::588::TaskManager.Task::(_updateState) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state preparing -> state finished
Thread-12717::DEBUG::2013-01-22 09:19:02,262::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-12717::DEBUG::2013-01-22 09:19:02,262::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-12717::DEBUG::2013-01-22 09:19:02,263::task::978::TaskManager.Task::(_decref) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::ref 0 aborting False
Thread-12718::DEBUG::2013-01-22 09:19:02,307::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
Thread-12718::DEBUG::2013-01-22 09:19:02,307::task::588::TaskManager.Task::(_updateState) Task=`c07a075a-a910-4bc3-9a33-b957d05ea270`::moving from state init -> state preparing
Thread-12718::INFO::2013-01-22 09:19:02,307::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '6463ca53-6c57-45f6-bb5c-45505891cae9', 'port': ''}], options=None)
Thread-12718::DEBUG::2013-01-22 09:19:02,467::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 my-gluster-ip:/gvol02/GlusterDomain /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain' (cwd None)
Thread-12718::ERROR::2013-01-22 09:19:02,486::hsm::1932::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
File "/usr/share/vdsm/storage/storageServer.py", line 256, in connect
File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
File "/usr/share/vdsm/storage/mount.py", line 190, in mount
File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
MountError: (32, ";mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n")
engine.log:
2013-01-22 17:19:20,073 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-7) [25932203] START, ValidateStorageServerConnectionVDSCommand(vdsId = 626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: null, connection: my-gluster-ip:/gvol02/GlusterDomain };]), log id: 303f4753
2013-01-22 17:19:20,095 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-7) [25932203] FINISH, ValidateStorageServerConnectionVDSCommand, return: {00000000-0000-0000-0000-000000000000=0}, log id: 303f4753
2013-01-22 17:19:20,115 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp--0.0.0.0-8009-7) [25932203] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2013-01-22 17:19:20,117 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-7) [25932203] START, ConnectStorageServerVDSCommand(vdsId = 626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 6463ca53-6c57-45f6-bb5c-45505891cae9, connection: my-gluster-ip:/gvol02/GlusterDomain };]), log id: 198f3eb4
2013-01-22 17:19:20,323 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-7) [25932203] FINISH, ConnectStorageServerVDSCommand, return: {6463ca53-6c57-45f6-bb5c-45505891cae9=477}, log id: 198f3eb4
2013-01-22 17:19:20,325 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (ajp--0.0.0.0-8009-7) [25932203] The connection with details my-gluster-ip:/gvol02/GlusterDomain failed because of error code 477 and error message is: 477
2013-01-22 17:19:20,415 INFO [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] (ajp--0.0.0.0-8009-6) [6641b9e1] Running command: AddNFSStorageDomainCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2013-01-22 17:19:20,425 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--0.0.0.0-8009-6) [6641b9e1] START, CreateStorageDomainVDSCommand(vdsId = 626e37f4-5ee3-11e2-96fa-0030487c133e, storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static@8e25c6bc, args=my-gluster-ip:/gvol02/GlusterDomain), log id: 675539c4
2013-01-22 17:19:21,064 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-6) [6641b9e1] Failed in CreateStorageDomainVDS method
2013-01-22 17:19:21,065 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-6) [6641b9e1] Error code StorageDomainFSNotMounted and error message VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Storage domain remote path not mounted: ('/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain',)
2013-01-22 17:19:21,066 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-6) [6641b9e1] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return value
Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 360
mMessage Storage domain remote path not mounted: ('/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain',)
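For reference, the rpc.statd part of the error usually just means the NFS locking services are not running on the node; assuming the Fedora 17 based node image ships the standard units, they can be started (and enabled) before retrying the attach:

systemctl start rpcbind.service nfs-lock.service
systemctl enable rpcbind.service nfs-lock.service
rpcinfo -p | grep status      (statd registers itself as "status" here)

With statd up, the default vdsm mount options above should work without forcing -o nolock.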
12 years, 4 months
[Users] KVM version not showing in Ovirt Manager
by Tom Brown
Hi
I have just added another HV to a cluster and it's up and running fine. I can run VMs on it and migrate from other HVs onto it. I do note, however, that in the manager there is no KVM version listed as installed, while on other HVs in the cluster there is a version present.
I see that the KVM version is slightly different on this new host, but as I said, apart from this visual issue everything appears to be running fine. These HVs are CentOS 6.3 using dreyou 3.1.
Node where KVM version not showing in the manager
node003 ~]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.295.el6.10.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.10.x86_64
Node where KVM version is showing in the manager
node002 ~]# rpm -qa | grep kvm
qemu-kvm-tools-0.12.1.2-2.295.el6_3.8.x86_64
qemu-kvm-0.12.1.2-2.295.el6_3.8.x86_64
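Presumably vdsm only reports versions for the package names it knows about (qemu-kvm, qemu-img, libvirt and so on), so on the node with the qemu-kvm-rhev builds the lookup finds nothing and the manager shows an empty KVM version, even though KVM itself works fine. What the host actually reports to the engine can be checked with something like:

vdsClient -s 0 getVdsCaps | grep -i qemu

(the exact package list vdsm probes is an assumption here, not something verified against the dreyou build).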
thanks
12 years, 4 months
[Users] Encrypted Admin password error
by Dead Horse
This error appears very frequently in the engine.log.
2013-01-22 15:33:10,837 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (MSC service
thread 1-5) Failed to decrypt Data must start with zero
2013-01-22 15:33:10,838 ERROR
[org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (MSC service
thread 1-5) Failed to decrypt value for property TruststorePass will be
used encrypted value
2013-01-22 15:33:10,865 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (MSC service
thread 1-5) Failed to decrypt Data must start with zero
2013-01-22 15:33:10,866 ERROR
[org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (MSC service
thread 1-5) Failed to decrypt value for property AdminPassword will be used
encrypted value
This started appearing after a manual edit to the "AdminPassword" value in the
database.
I tried using engine-config -s AdminPassword='somepassword' to change it,
which always resulted in: cannot set value 'somepassword' to key
AdminPassword.
Hence the manual edit.
Any ideas on how to make the engine happy in that regard again?
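Presumably the AdminPassword value in vdc_options has to be encrypted with the engine's own keystore, so a plain-text edit in the database will keep tripping those decrypt errors. Letting engine-config do the encryption is the safer route; a sketch, assuming this engine-config build accepts interactive password entry:

engine-config -s AdminPassword=interactive     (prompts for the password and stores it encrypted)
service ovirt-engine restart

If that syntax is rejected, restoring the previous encrypted value from a database backup is the next-safest way back to a working state.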
- DHC
12 years, 4 months
[Users] oVirt Weekly Meeting -- 2013-01-23
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-23-15.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-23-15.00.txt
Log: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-23-15.00.log.html
============================
#ovirt: oVirt Weekly Meeting
============================
Meeting started by mburns_ovirt_ws at 15:00:57 UTC. The full logs are
available at
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-23-15.00.log.html .
Meeting summary
---------------
* agenda and roll call (mburns_ovirt_ws, 15:01:05)
* Release status (mburns, 15:04:10)
* vdsm and engine rpms are posted to ovirt-beta repo (mburns,
15:04:23)
* testing is ongoing with ovirt-node (mburns, 15:04:43)
* hope to have ovirt-node packages and image uploaded and beta
announcement sent today (mburns, 15:05:10)
* workshops (dneary, 15:08:24)
* Sunnyvale workshop on now. 90 registered, ~70 attendees for day 1
(dneary, 15:08:51)
* Dates for Shanghai workshop in Intel's campus there have been pushed
back (dneary, 15:09:29)
* Shanghai workshop will now happen on May 8-9, 2013 (dneary,
15:09:49)
* Call for participation and registration for that workshop will go
online next week (dneary, 15:10:38)
* oVirt Board meeting during the NetApp workshop is planned for
tomorrow, Thursday 24 Jan. Remote attendance is possible - please
contact dneary(a)redhat.com or lhawthor(a)redhat.com to attend remotely
(dneary, 15:13:33)
* Board meeting starts at 09:00 AM PST, 17:00 UTC (mburns, 15:16:38)
* Infra update (quick) (mburns, 15:17:15)
* <quaid> we've got *both* sets of servers & I'll be distributing
access to the Infra maintainers later today so work can continue in
my absence (mburns, 15:17:38)
* Release Status (continued) (mburns, 15:18:02)
* Q from aglitke -- will we have a 3.2 ovirt on a stick? (mburns,
15:18:33)
* A - yes, it's already in use at the workshop right now (mburns,
15:18:48)
* next step is to get it into a jenkins build to build it daily
(mburns, 15:19:07)
* question from linex about upgrades -- will it be a simple package
update to go from 3.1 to 3.2? (mburns, 15:19:34)
* answer -- no, 3.1 runs on F17 and 3.2 on F18, so there is an OS
upgrade involved as well as running engine-upgrade (mburns,
15:20:17)
* upgrade from 3.2 beta to 3.2 GA *should* be more smooth (mburns,
15:20:57)
* Proposal -- since we don't have beta ready yet, slip test day from
24-Jan to 29-Jan and GA from 30-Jan to 06-Feb (mburns, 15:24:12)
* question -- release note status (mburns, 15:25:16)
* sgordon and cheryn tan are coordinating (mburns, 15:25:29)
* maintainers need to be responsive is asked for info from them if we
hope to have them available on time (mburns, 15:25:48)
* should have a draft ready for the 29th (mburns, 15:26:13)
* AGREED: release date to slip 1 week to 06-Feb and test day to 29-Jan
(mburns, 15:27:40)
* Other Topics (mburns, 15:29:40)
Meeting ended at 15:32:24 UTC.
Action Items
------------
Action Items, by person
-----------------------
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (60)
* dneary (20)
* aglitke (19)
* mburns_ovirt_ws (7)
* sgordon (5)
* linex (5)
* ovirtbot (4)
* Rydekull (2)
* quaid (1)
* dustins (1)
* oschreib (0)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
12 years, 4 months
[Users] Attaching floppy to guest
by Gianluca Cecchi
Hello,
If I want to attach a floppy image to a guest, can I only do this via Run
Once when it is powered off, or can I attach it to a running guest
too?
In my case I have a winxp guest running and I only see
Change CD
as an option...
Thanks,
Gianluca
12 years, 4 months
[Users] Guests are paused without error message while doing maintenance on NFS storage
by Karli Sjöberg
Hi,

this is a bit complex issue, so I'll try and be as clear as possible. We are running oVirt-3.1 in our production environment, based on minimal Fedora 17 installs. We have 4x HP 380's (Intel) running in one cluster, and 2x Sun 7310's (AMD) in another cluster. They have shared storage over NFS to a FreeBSD-based system that uses ZFS as a filesystem. The storage boots off of a mirrored ZFS pool made up of two USB's that only houses /, while /var, /usr, etc. lie on a separate ZFS pool made up of the rest of the HDD's in the system. It looks like this:

FS                          MOUNTPOINT
pool1 (the mirrored USB's)  none
pool1/root                  / (mounted ro)
pool2 (the regular HDD's)   none
pool2/root                  none
pool2/root/usr              /usr
pool2/root/usr/home         /usr/home
pool2/root/usr/local        /usr/local
pool2/root/var              /var
tmpfs                       /tmp
pool2/export                /export
pool2/export/ds1            /export/ds1
pool2/export/ds1/data       /export/ds1/data
pool2/export/ds1/export     /export/ds1/export
pool2/export/ds1/iso        /export/ds1/iso
pool2/export/ds2            /export/ds2
pool2/export/ds2/data       /export/ds2/data

/etc/exports:
/export/ds1/data    -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds1/export  -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds1/iso     -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds2/data    -alldirs -maproot=root 10.0.0.(all of the HV's)

To make those USB's last for as long as possible, / is usually mounted read-only. When you need to change anything, you remount / to read-write, do the maintenance, and then remount back to read-only again. But when you issue a mount command, the VM's in oVirt pause. At first we didn't understand that this was actually the cause and tried to correlate the seemingly spontaneous pausing to just about anything. Then I was logged in to both oVirt's webadmin and the storage at the same time and issued "mount -uw /", and *boom*, random VM's started to pause:) Not all of them though, and not just every one in either cluster or something; it is completely random which VM's are paused every time.

# time mount -ur /

real 0m2.198s
user 0m0.000s
sys 0m0.002s

And here's what vdsm on one of the HV's thought about that:
http://pastebin.com/MXjgpDfU

It begins with all VM's being "Up", then me issuing the remount on the storage from read-write to read-only, which took 2 secs to complete, vdsm freaking out when it shortly loses its connections, and lastly me at 14:34 making them all run again from webadmin.

Two things:
1) Does anyone know of any improvements that could be made on the storage side, apart from the obvious "stop remounting", since patching must eventually be done, configurations changed, and so on. A smarter way of configuring something? Booting from another ordinary HDD is sadly out of the question because there isn't any room for any more, it's full. And I would really have liked to boot from the HDD's that are already in there, but there are "other things" preventing that.
2) Nothing in engine was logged about it, no "Events" were made and nothing in engine.log that could indicate something had gone wrong at all. If it wasn't serious enough to issue a warning, why disrupt the service with pausing the machines? Or at least automatically start them back up when connection to the storage almost immediately came back on its own. Saying nothing made it really hard to troubleshoot, since we didn't initially know at all what could be causing the pauses to happen, and when.

Best Regards
/Karli Sjöberg
--_000_5F9E965F5A80BC468BE5F40576769F091023A18Bexchange21_
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: base64
PCFET0NUWVBFIEhUTUwgUFVCTElDICItLy9XM0MvL0RURCBIVE1MIDQuMCBUUkFOU0lUSU9OQUwv
L0VOIj4NCjxodG1sPg0KPGhlYWQ+DQo8bWV0YSBodHRwLWVxdWl2PSJDb250ZW50LVR5cGUiIGNv
bnRlbnQ9InRleHQvaHRtbDsgY2hhcnNldD11dGYtOCI+DQo8bWV0YSBuYW1lPSJHRU5FUkFUT1Ii
IGNvbnRlbnQ9Ikd0a0hUTUwvNC40LjQiPg0KPC9oZWFkPg0KPGJvZHk+DQpIaSw8YnI+DQo8YnI+
DQp0aGlzIGlzIGEgYml0IGNvbXBsZXggaXNzdWUsIHNvIEnCtGwgdHJ5IGFuZCBiZSBhcyBjbGVh
ciBhcyBwb3NzaWJsZS4gV2UgYXJlIHJ1bm5pbmcgb1ZpcnQtMy4xIGluIG91ciBwcm9kdWN0aW9u
IGVudmlyb25tZW50LCBiYXNlZCBvbiBtaW5pbWFsIEZlZG9yYSAxNyBpbnN0YWxscy4gV2UgaGF2
ZSA0eEhQIDM4MCdzIChJbnRlbCkgcnVubmluZyBpbiBvbmUgY2x1c3RlciwgYW5kIDJ4U3VuIDcz
MTAncyAoQU1EKSBpbiBhbm90aGVyIGNsdXN0ZXIuIFRoZXkNCiBoYXZlIHNoYXJlZCBzdG9yYWdl
IG92ZXIgTkZTIHRvIGEgRnJlZUJTRC1iYXNlZCBzeXN0ZW0gdGhhdCB1c2VzIFpGUyBhcyBhIGZp
bGVzeXN0ZW0uIFRoZSBzdG9yYWdlIGJvb3RzIG9mZiBvZiBhIG1pcnJvcmVkIFpGUyBwb29sIG1h
ZGUgdXAgb2YgdHdvIFVTQidzIHRoYXQgb25seSBob3VzZXMgLywgd2hpbGUgL3ZhciwgL3Vzciwg
ZXRjLiBsaWVzIG9uIGEgc2VwYXJhdGUgWkZTIHBvb2wgbWFkZSB1cCBvZiB0aGUgcmVzdCBvZiB0
aGUgSEREJ3MNCiBpbiB0aGUgc3lzdGVtLiBJdCBsb29rcyBsaWtlIHRoaXM6PGJyPg0KPGJyPg0K
RlMmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgTU9VTlRQT0lOVDxicj4NCnBvb2wxIChU
aGUgbWlycm9yZWQgVVNCJ3MpJm5ic3A7Jm5ic3A7Jm5ic3A7IG5vbmU8YnI+DQpwb29sMS9yb290
Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7Jm5ic3A7IC8gKG1vdW50ZWQgcm8pPGJyPg0KcG9vbDIgKFRoZSByZWd1bGFyIEhERCdzKSZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwOyBub25lPGJyPg0KcG9vbDIvcm9vdCZuYnNwOyZuYnNwOyZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw
OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyBub25l
PGJyPg0KcG9vbDIvcm9vdC91c3ImbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgL3Vzcjxicj4NCnBv
b2wyL3Jvb3QvdXNyL2hvbWUmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgL3Vzci9ob21lPGJyPg0KcG9v
bDIvcm9vdC91c3IvbG9jYWwmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgL3Vzci9sb2NhbDxi
cj4NCnBvb2wyL3Jvb3QvdmFyJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7IC92YXI8YnI+DQp0bXBm
cyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw
OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw
OyAvdG1wPGJyPg0KcG9vbDIvZXhwb3J0Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7IC9leHBvcnQ8YnI+DQpwb29sMi9leHBvcnQvZHMxJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7IC9leHBvcnQvZHMxPGJyPg0KcG9vbDIvZXhwb3J0
L2RzMS9kYXRhJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
Jm5ic3A7Jm5ic3A7IC9leHBvcnQvZHMxL2RhdGE8YnI+DQpwb29sMi9leHBvcnQvZHMxL2V4cG9y
dCZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyAvZXhwb3J0L2RzMS9leHBvcnQ8
YnI+DQpwb29sMi9leHBvcnQvZHMxL2lzbyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyAvZXhwb3J0L2RzMS9pc288
YnI+DQpwb29sMi9leHBvcnQvZHMyJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7Jm5ic3A7Jm5ic3A7IC9leHBvcnQvZHMyPGJyPg0KcG9vbDIvZXhwb3J0L2RzMi9kYXRhJm5i
c3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
/export/ds2/data

/etc/exports:
/export/ds1/data     -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds1/export   -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds1/iso      -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds2/data     -alldirs -maproot=root 10.0.0.(all of the HV's)

To make those USB sticks last for as long as possible, / is usually mounted read-only. And when you need to change anything, you need to remount / read-write, do the maintenance, and then remount back to read-only again. But when you issue a mount command, the VMs in oVirt pause. At first we didn't understand that this was actually the cause and tried to correlate the seemingly spontaneous pausing to just about anything. Then I was logged in to both oVirt's webadmin and the storage at the same time and issued "mount -uw /", and *boom*, random VMs started to pause :) Not all of them though, and not just every one in either cluster or something; it is completely random which VMs are paused every time.

# time mount -ur /

real 0m2.198s
user 0m0.000s
sys 0m0.002s

And here's what vdsm on one of the HVs thought about that:
http://pastebin.com/MXjgpDfU

It begins with all VMs being "Up", then me issuing the remount on the storage from read-write to read-only, which took 2 seconds to complete, vdsm freaking out when it briefly loses its connections, and lastly me at 14:34 making them all run again from webadmin.

Two things:
1) Does anyone know of any improvements that could be made on the storage side, apart from the obvious "stop remounting", since patching must eventually be done, configurations changed, and so on? A smarter way of configuring something? Booting from another ordinary HDD is sadly out of the question because there isn't any room for any more; it's full. And I would really have liked to boot from the HDDs that are already in there, but there are "other things" preventing that.
2) Nothing in the engine was logged about it: no "Events" were created and nothing in engine.log indicated that anything had gone wrong at all. If it wasn't serious enough to issue a warning, why disrupt the service by pausing the machines? Or at least automatically start them back up when the connection to the storage almost immediately came back on its own. Saying nothing made it really hard to troubleshoot, since we didn't initially know at all what could be causing the pauses to happen, and when.

Best Regards
/Karli Sjöberg
Re: [Users] [Engine-devel] ovirt engine sdk
by Michael Pasternak
On 01/23/2013 03:53 PM, navin p wrote:
> Hi Michael,
>
> Thanks for your help.
>
> On Wed, Jan 23, 2013 at 6:15 PM, Michael Pasternak <mpastern(a)redhat.com <mailto:mpastern@redhat.com>> wrote:
>
>
>
> in Python, you can see an object's attributes by accessing __dict__/__getattr__/dir(object)/etc.;
> vm.__dict__ will do the job for you. However, I'd suggest using an IDE (I'm using Eclipse + the PyDev plugin);
> this way you'll be able to access object attributes simply via Ctrl+SPACE auto-completion.
>
> Do I have to import something for Ctrl+SPACE to work? It doesn't work for me, at least for list attributes.
>
> for vm in vmlist:
> print vm.name <http://vm.name>,vm.memory,vm.id <http://vm.id>,vm.os.kernel,vm.cluster.id <http://vm.cluster.id>,vm.start_time
> #print help(vm.statistics.list())
> vmslist = vm.statistics.list()
> for i in vmslist:
> print i.get_name()
>
> prints
>
> memory.installed
> memory.used
> cpu.current.guest
> cpu.current.hypervisor
> cpu.current.total
>
> but i need the values of memory.installed and memory.used .
Statistic holders are complex types; you can fetch the data with:
i.unit // the unit of the holder's data
i.values.value[0].datum // the actual data
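For example, putting that together with the loop above (a minimal sketch assuming the ovirtsdk Python package; the engine URL, credentials and VM name are placeholders):

from ovirtsdk.api import API

# Connect to the engine (URL and credentials are placeholders).
api = API(url='https://engine.example.com:443/api',
          username='admin@internal', password='secret', insecure=True)

vm = api.vms.get(name='myvm')
for stat in vm.statistics.list():
    # Each statistic holder carries a unit and a list of values.
    print stat.get_name(), stat.unit, stat.values.value[0].datum

api.disconnect()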
>
> Also, where do I get the Java SDK and jars? I looked at Maven but it was version 1.0 of the SDK.
The central repo has 1.0.0.2-1, see [1]; deployment details can be found at [2], and the wiki at [3].
[1] http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.ovirt.engine.sdk%22
[2] http://www.ovirt.org/Java-sdk#Maven_deployment
[3] http://www.ovirt.org/Java-sdk
>
>
> Regards,
> Navin
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
[Users] vnic : add a custom properties and use it in vdsm_hook
by Benoit ML
Hello everybody,
Is there a way to add custom properties for a NIC, and use them in vdsm_hooks?
The objective is to redefine some network parameters of a vNIC at
VM boot ... (such as per-vNIC bandwidth, per-vNIC VLAN, and so on) and
maybe use Open vSwitch ...
Thank you in advance
--
--
Benoit
[Users] host deploy and after reboot not responsive
by Gianluca Cecchi
Hello,
using ovirt 3.2 on fedora 18 from ovirt-nightly
3.2.0-1.20130113.gitc954518
After deploying and rebooting a Fedora 18 host, it stays non-responsive.
What should I check, and which log files should I send from the engine and the node?
Thanks,
Gianluca
[Users] Adding additional network(s)
by Tom Brown
Hi
I am setting up another DC that is managed by my sole management
node, and this DC will have a requirement that the VMs will need an
additional storage NIC. This NIC is for NFS/CIFS traffic and is
independent of the oVirt VMs' disks.
I have cabled the additional physical NIC in the HVs, as this network
is non-routed storage, and I am not sure where to go from here. What's the next
step needed to add the NIC to the DC? I then presume adding the NIC
to the VMs is straightforward.
thanks
Re: [Users] cannot add gluster domain
by Alex Leonhardt
Hi all,
I am not too familiar with Fedora and its services; can anyone help him?
Alex
On Jan 23, 2013 5:02 AM, "T-Sinjon" <tscbj1989(a)gmail.com> wrote:
> I have forced v3 in my /etc/nfsmount and there's no firewall between NFS
> server and the host.
>
> The only problem is that rpc.statd is not running. Could you tell me how I can
> start it, since there's no rpcbind installed on oVirt Node 2.5.5-0.1?
>
> [root@ovirtnode1 ~]# systemctl status nfs-lock.service
> nfs-lock.service - NFS file locking service.
> Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
> Active: failed (Result: exit-code) since Thu, 17 Jan 2013 09:41:45
> +0000; 5 days ago
> CGroup: name=systemd:/system/nfs-lock.service
>
> Jan 17 09:41:45 localhost.localdomain rpc.statd[1385]: Version 1.2.6
> starting
> Jan 17 09:41:45 localhost.localdomain rpc.statd[1385]: Initializing NSM
> state
> [root@ovirtnode1 ~]# systemctl start nfs-lock.service
> Failed to issue method call: Unit rpcbind.service failed to load: No such
> file or directory. See system logs and 'systemctl status rpcbind.service'
> for details.
>
> On 22 Jan, 2013, at 6:14 PM, Alex Leonhardt <alex.tuxx(a)gmail.com> wrote:
>
> Hi, this looks like the error you're getting:
>
> MountError: (32, ";mount.nfs: rpc.statd is not running but is required for
> remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or
> start statd.\nmount.nfs: an incorrect mount option was specified\n")
>
> Are you running NFSv3 on that host? If yes, have you forced v3? Is
> rpc.statd running? Is the NFS server firewalling off the rpc.* ports?
>
> alex
>
>
> On 22 January 2013 09:58, T-Sinjon <tscbj1989(a)gmail.com> wrote:
>
>> HI, everyone:
>> Recently , I newly installed ovirt 3.1 from
>> http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
>> and node use
>> http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1...
>>
>> when i add gluster domain via nfs, mount error occurred,
>> I have do manually mount action on the node but failed if without
>> -o nolock option:
>> # /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
>> my-gluster-ip:/gvol02/GlusterDomain
>> /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
>> mount.nfs: rpc.statd is not running but is required for remote
>> locking. mount.nfs: Either use '-o nolock' to keep locks local, or start
>> statd. mount.nfs: an incorrect mount option was specified
>>
>> blow is the vdsm.log from node and engine.log, any help was
>> appreciated :
>>
>> vdsm.log
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,261::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,261::task::588::TaskManager.Task::(_updateState)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state init ->
>> state preparing
>> Thread-12717::INFO::2013-01-22
>> 09:19:02,262::logUtils::37::dispatcher::(wrapper) Run and protect:
>> validateStorageServerConnection(domType=1,
>> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
>> 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '',
>> 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port':
>> ''}], options=None)
>> Thread-12717::INFO::2013-01-22
>> 09:19:02,262::logUtils::39::dispatcher::(wrapper) Run and protect:
>> validateStorageServerConnection, Return response: {'statuslist':
>> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::task::1172::TaskManager.Task::(prepare)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::finished: {'statuslist':
>> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::task::588::TaskManager.Task::(_updateState)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state preparing ->
>> state finished
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::resourceManager::809::ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::resourceManager::844::ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,263::task::978::TaskManager.Task::(_decref)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::ref 0 aborting False
>> Thread-12718::DEBUG::2013-01-22
>> 09:19:02,307::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
>> Thread-12718::DEBUG::2013-01-22
>> 09:19:02,307::task::588::TaskManager.Task::(_updateState)
>> Task=`c07a075a-a910-4bc3-9a33-b957d05ea270`::moving from state init ->
>> state preparing
>> Thread-12718::INFO::2013-01-22
>> 09:19:02,307::logUtils::37::dispatcher::(wrapper) Run and protect:
>> connectStorageServer(domType=1,
>> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
>> 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '',
>> 'password': '******', 'id': '6463ca53-6c57-45f6-bb5c-45505891cae9', 'port':
>> ''}], options=None)
>> Thread-12718::DEBUG::2013-01-22
>> 09:19:02,467::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
>> /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
>> my-gluster-ip:/gvol02/GlusterDomain
>> /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain' (cwd None)
>> Thread-12718::ERROR::2013-01-22
>> 09:19:02,486::hsm::1932::Storage.HSM::(connectStorageServer) Could not
>> connect to storageServer
>> Traceback (most recent call last):
>> File "/usr/share/vdsm/storage/hsm.py", line 1929, in
>> connectStorageServer
>> File "/usr/share/vdsm/storage/storageServer.py", line 256, in connect
>> File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
>> File "/usr/share/vdsm/storage/mount.py", line 190, in mount
>> File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
>> MountError: (32, ";mount.nfs: rpc.statd is not running but is required
>> for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local,
>> or start statd.\nmount.nfs: an incorrect mount option was specified\n")
>>
>> engine.log:
>> 2013-01-22 17:19:20,073 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] START,
>> ValidateStorageServerConnectionVDSCommand(vdsId =
>> 626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId =
>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList =
>> [{ id: null, connection: my-gluster-ip:/gvol02/GlusterDomain };]), log id:
>> 303f4753
>> 2013-01-22 17:19:20,095 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] FINISH,
>> ValidateStorageServerConnectionVDSCommand, return:
>> {00000000-0000-0000-0000-000000000000=0}, log id: 303f4753
>> 2013-01-22 17:19:20,115 INFO
>> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] Running command:
>> AddStorageServerConnectionCommand internal: false. Entities affected : ID:
>> aaa00000-0000-0000-0000-123456789aaa Type: System
>> 2013-01-22 17:19:20,117 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] START,
>> ConnectStorageServerVDSCommand(vdsId =
>> 626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId =
>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList =
>> [{ id: 6463ca53-6c57-45f6-bb5c-45505891cae9, connection:
>> my-gluster-ip:/gvol02/GlusterDomain };]), log id: 198f3eb4
>> 2013-01-22 17:19:20,323 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] FINISH, ConnectStorageServerVDSCommand,
>> return: {6463ca53-6c57-45f6-bb5c-45505891cae9=477}, log id: 198f3eb4
>> 2013-01-22 17:19:20,325 ERROR
>> [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (ajp--0.0.0.0-8009-7)
>> [25932203] The connection with details my-gluster-ip:/gvol02/GlusterDomain
>> failed because of error code 477 and error message is: 477
>> 2013-01-22 17:19:20,415 INFO
>> [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] Running command:
>> AddNFSStorageDomainCommand internal: false. Entities affected : ID:
>> aaa00000-0000-0000-0000-123456789aaa Type: System
>> 2013-01-22 17:19:20,425 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] START, CreateStorageDomainVDSCommand(vdsId
>> = 626e37f4-5ee3-11e2-96fa-0030487c133e,
>> storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static@8e25c6bc,
>> args=my-gluster-ip:/gvol02/GlusterDomain), log id: 675539c4
>> 2013-01-22 17:19:21,064 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] Failed in CreateStorageDomainVDS method
>> 2013-01-22 17:19:21,065 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] Error code StorageDomainFSNotMounted and
>> error message VDSGenericException: VDSErrorException: Failed to
>> CreateStorageDomainVDS, error = Storage domain remote path not mounted:
>> ('/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain',)
>> 2013-01-22 17:19:21,066 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] Command
>> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand
>> return value
>> Class Name:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
>> mStatus Class Name:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
>> mCode 360
>> mMessage Storage domain remote path not mounted:
>> ('/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain',)
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
>
> | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
>
>
>
Re: [Users] Where to download ovirt-engine-sdk-java 1.0.0.2?
by Michael Pasternak
Hi,
On 01/23/2013 02:37 AM, Sherry Yu wrote:
> Hi Michael,
>
> Can you point me to download ovirt-engine-sdk-java 1.0.0.2 and where to get more info. on Java development using this API?
The SDK can be deployed using [1]; more info can be found at [2].
[1] http://www.ovirt.org/Java-sdk#Maven_deployment
[2] http://www.ovirt.org/Java-sdk
>
> My task is to create an integration between RHEV 3.x and SAP LVM, a product that has a Java API. I met with Oved at the oVirt workshop this week and he mentioned this Java SDK. It sounds like a better fit than the REST API that I have been investigating.
>
> Many Thanks and I am looking forward to hearing from you.
> Sherry
>
> ----- Forwarded Message -----
> From: "Oved Ourfalli" <ovedo(a)redhat.com>
> To: "Sherry Yu" <syu(a)redhat.com>
> Sent: Tuesday, January 22, 2013 4:30:11 PM
> Subject: Fwd: [Engine-devel] ovirt-engine-sdk-java 1.0.0.2 released
>
>
>
> ----- Forwarded Message -----
> From: "Michael Pasternak" <mpastern(a)redhat.com>
> To: users(a)ovirt.org
> Cc: "engine-devel" <engine-devel(a)ovirt.org>
> Sent: Wednesday, January 16, 2013 6:38:02 AM
> Subject: [Engine-devel] ovirt-engine-sdk-java 1.0.0.2 released
>
>
> Basically this release addresses an issue when [1] constructor is used
> with NULLs as optional parameters,
>
> [1] public Api(String url, String username, String password, String key_file,
> String cert_file, String ca_file, Integer port, Integer timeout,
> Boolean persistentAuth, Boolean insecure, Boolean filter, Boolean debug)
>
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
[Users] setupNetworks failure - Host non-operational
by Deepak C Shetty
Hi All,
I have a multi-VM setup, where I have ovirt engine on one VM and
VDSM host on another.
Discovering the host from the engine puts the host in Unassigned state,
with the error saying 'ovirtmgmt' network not found.
When I select setupNetworks and drag-drop ovirtmgmt to set it up over
eth0, I see the error below in VDSM and the host goes to a non-operational state.
I tried the steps mentioned by Alon in
http://lists.ovirt.org/pipermail/users/2012-December/011257.html
but still see the same error
============= dump from vdsm.log ================
MainProcess|Thread-23::ERROR::2013-01-22
18:25:53,496::configNetwork::1438::setupNetworks::(setupNetworks)
Requested operation is not valid: cannot set autostart for transient network
Traceback (most recent call last):
File "/usr/share/vdsm/configNetwork.py", line 1420, in setupNetworks
implicitBonding=True, **d)
File "/usr/share/vdsm/configNetwork.py", line 1030, in addNetwork
configWriter.createLibvirtNetwork(network, bridged, iface)
File "/usr/share/vdsm/configNetwork.py", line 208, in
createLibvirtNetwork
self._createNetwork(netXml)
File "/usr/share/vdsm/configNetwork.py", line 192, in _createNetwork
net.setAutostart(1)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2148, in
setAutostart
if ret == -1: raise libvirtError ('virNetworkSetAutostart()
failed', net=self)
libvirtError: Requested operation is not valid: cannot set autostart for
transient network
MainProcess|Thread-23::ERROR::2013-01-22
18:25:53,502::supervdsmServer::77::SuperVdsm.ServerCallback::(wrapper)
Error in setupNetworks
Traceback (most recent call last):
File "/usr/share/vdsm/supervdsmServer.py", line 75, in wrapper
return func(*args, **kwargs)
File "/usr/share/vdsm/supervdsmServer.py", line 170, in setupNetworks
return configNetwork.setupNetworks(networks, bondings, **options)
File "/usr/share/vdsm/configNetwork.py", line 1420, in setupNetworks
implicitBonding=True, **d)
File "/usr/share/vdsm/configNetwork.py", line 1030, in addNetwork
configWriter.createLibvirtNetwork(network, bridged, iface)
File "/usr/share/vdsm/configNetwork.py", line 208, in
createLibvirtNetwork
self._createNetwork(netXml)
File "/usr/share/vdsm/configNetwork.py", line 192, in _createNetwork
net.setAutostart(1)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2148, in
setAutostart
if ret == -1: raise libvirtError ('virNetworkSetAutostart()
failed', net=self)
libvirtError: Requested operation is not valid: cannot set autostart for
transient network
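For context, libvirt only allows autostart on persistent (defined) networks, which is what the error above complains about. A minimal sketch with the libvirt Python bindings illustrating the distinction (the network name and bridge are hypothetical):

import libvirt

NET_XML = """
<network>
  <name>vdsm-ovirtmgmt</name>
  <forward mode='bridge'/>
  <bridge name='ovirtmgmt'/>
</network>
"""

conn = libvirt.open('qemu:///system')

# networkCreateXML() creates a *transient* network; calling setAutostart()
# on it fails with "cannot set autostart for transient network".
# net = conn.networkCreateXML(NET_XML)

# networkDefineXML() creates a *persistent* network, which does accept autostart.
net = conn.networkDefineXML(NET_XML)
net.create()         # start it now
net.setAutostart(1)  # and have libvirt start it at boot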
[Users] Help on an almost migrated ovirt-engine
by Adrian Gibanel
I need to migrate an ovirt-engine from an All-In-One (AIO) setup to a dedicated machine.
So with an old mailing message I've tried to do it but not finished yet, that's why I ask for help.
I've written a wiki page for the experience so that it becomes a howto which can be found here:
http://www.ovirt.org/User:Adrian15/oVirt_engine_migration
At the last step, the one that finally starts ovirt-engine, I've decided to ask for help here just in case I was missing something important.
So here are my doubts.
* Original message that inspired the howto is here: http://www.mail-archive.com/users@ovirt.org/msg00670.html
* What packages can I safely delete from an AIO setup so that it's just a hypervisor once I've migrated the ovirt-engine part?
* Is the way I've used to recreate the database (below) the right one?
** Origin
pg_dump -U postgres engine | gzip > engine_db.gz
** Destination
pg_dump -U postgres -s -f tempdb.dump engine
dropdb -U postgres engine
createdb -U postgres engine
zcat engine_db.gz | psql -U postgres engine
* Let's read http://www.mail-archive.com/users@ovirt.org/msg00682.html : "WRT certificates, note that hostname should not change, or SSL will be invalidated."
Did he mean the SSL used when you connect via http or https to the manager, which currently doesn't bother me?
Or maybe the SSL used to connect to other hosts and communicate with vdsm (sorry if I'm talking nonsense; I don't understand the oVirt architecture completely), which does bother me?
* Are the certificates in /etc/pki/ovirt-engine? Anything more?
* Is the configuration in /etc/ovirt-engine? Anything more?
Thank you!
--
--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
[Users] error at Creating the database
by Arindam Choudhury
Hi,
I am a newbie. I am trying to build oVirt-engine from source following this
tutorial http://www.ovirt.org/Building_oVirt_engine
When I am trying to create the database, I am getting the following error:
$ ./create_db_devel.sh -u postgres
Running original create_db script...
Creating the database: engine
dropdb: could not connect to database template1: FATAL: Ident
authentication failed for user "postgres"
createdb: could not connect to database template1: FATAL: Ident
authentication failed for user "postgres"
Failed to create database engine
Failed to create database engine
I am on Rawhide and I have already altered:
# tail -3 /var/lib/pgsql/data/pg_hba.conf
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
What am I doing wrong?
Thanks,
Arindam
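One thing worth double-checking, as a guess: pg_hba.conf entries are matched top to bottom, so an earlier "ident" line can still win if the "trust" lines were only appended at the end, and PostgreSQL needs a reload after the file is edited. A quick connectivity check for the TCP entries, assuming the psycopg2 package is installed:

import psycopg2

# Connect the way the create_db script's tools would, relying on the trust entries (no password).
conn = psycopg2.connect(host='127.0.0.1', dbname='template1', user='postgres')
cur = conn.cursor()
cur.execute('SELECT version()')
print cur.fetchone()[0]
conn.close()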
[Users] error while building oVirt-engine
by Arindam Choudhury
Hi,
I am following the http://www.ovirt.org/Building_oVirt_engine tutorial to build
oVirt-engine from source. When I do:
$ mvn clean install -e
[INFO]
[INFO]
------------------------------------------------------------------------
[INFO] Building Extensions for GWT 3.2.0
[INFO]
------------------------------------------------------------------------
[WARNING] The POM for org.ovirt.engine.ui:genericapi:jar:3.2.0 is missing,
no dependency information available
[INFO]
------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] oVirt Modules - ui ................................ SUCCESS [0.013s]
[INFO] Extensions for GWT ................................ FAILURE [0.014s]
[INFO] UI Utils Compatibility (for UICommon) ............. SKIPPED
[INFO]
------------------------------------------------------------------------
[ERROR] Failed to execute goal on project gwt-extension: Could not resolve
dependencies for project org.ovirt.engine.ui:gwt-extension:jar:3.2.0:
Failure to find org.ovirt.engine.ui:genericapi:jar:3.2.0 in
http://repo1.maven.org/maven2 was cached in the local repository,
resolution will not be reattempted until the update interval of central has
elapsed or updates are forced -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
goal on project gwt-extension: Could not resolve dependencies for project
org.ovirt.engine.ui:gwt-extension:jar:3.2.0: Failure to find
org.ovirt.engine.ui:genericapi:jar:3.2.0 in
http://repo1.maven.org/maven2was cached in the local repository,
resolution will not be reattempted
until the update interval of central has elapsed or updates are forced
at
org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:210)
at
org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.resolveProjectDependencies(LifecycleDependencyResolver.java:117)
at
org.apache.maven.lifecycle.internal.MojoExecutor.ensureDependenciesAreResolved(MojoExecutor.java:258)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:201)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:322)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:158)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.project.DependencyResolutionException: Could
not resolve dependencies for project
org.ovirt.engine.ui:gwt-extension:jar:3.2.0: Failure to find
org.ovirt.engine.ui:genericapi:jar:3.2.0 in
http://repo1.maven.org/maven2was cached in the local repository,
resolution will not be reattempted
until the update interval of central has elapsed or updates are forced
at
org.apache.maven.project.DefaultProjectDependenciesResolver.resolve(DefaultProjectDependenciesResolver.java:189)
at
org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:185)
... 22 more
Caused by: org.sonatype.aether.resolution.DependencyResolutionException:
Failure to find org.ovirt.engine.ui:genericapi:jar:3.2.0 in
http://repo1.maven.org/maven2 was cached in the local repository,
resolution will not be reattempted until the update interval of central has
elapsed or updates are forced
at
org.sonatype.aether.impl.internal.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:375)
at
org.apache.maven.project.DefaultProjectDependenciesResolver.resolve(DefaultProjectDependenciesResolver.java:183)
... 23 more
Caused by: org.sonatype.aether.resolution.ArtifactResolutionException:
Failure to find org.ovirt.engine.ui:genericapi:jar:3.2.0 in
http://repo1.maven.org/maven2 was cached in the local repository,
resolution will not be reattempted until the update interval of central has
elapsed or updates are forced
at
org.sonatype.aether.impl.internal.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:538)
at
org.sonatype.aether.impl.internal.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:216)
at
org.sonatype.aether.impl.internal.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:358)
... 24 more
Caused by: org.sonatype.aether.transfer.ArtifactNotFoundException: Failure
to find org.ovirt.engine.ui:genericapi:jar:3.2.0 in
http://repo1.maven.org/maven2 was cached in the local repository,
resolution will not be reattempted until the update interval of central has
elapsed or updates are forced
at
org.sonatype.aether.impl.internal.DefaultUpdateCheckManager.newException(DefaultUpdateCheckManager.java:230)
at
org.sonatype.aether.impl.internal.DefaultUpdateCheckManager.checkArtifact(DefaultUpdateCheckManager.java:204)
at
org.sonatype.aether.impl.internal.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:427)
... 26 more
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions,
please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionExce...
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the
command
[ERROR] mvn <goals> -rf :gwt-extension
[Users] Power mgmt and engine relationship
by Gianluca Cecchi
Hello,
Is there any connectivity requirement between the engine and the hosts for
fencing?
Or is it only from any host to any host?
Is it a cluster-wide concept, correct? Not a DC-wide one?
Thanks
Gianluca
[Users] oVirt node install hanging on admin login prompt
by Nicolas Ecarnot
Hi,
I'm trying to install oVirt Node 2.5.5-0.1.fc17 and it is not going
well. I'm trying to install it on a Dell blade M610 via an iDRAC, and I'm
quite accustomed to this kind of thing.
- As I already had the issue of the label on the grub boot line, I knew
how to specify /dev/sr0 or sr1 on the linux line. Well.
- The install is ok with discovering my hard drive, and the installation
seems to be fine. After rebooting, the boot process leads me to the
login prompt where the documentation tells me to log in as 'admin' with the
correct password:
The keyboard is not responding; I cannot even type the user name.
I can't switch consoles, and no network activity is seen.
There does not seem to be a kernel panic, as some additional log lines
are displayed (about the creation of bond interfaces).
I also see some more lines about "systemd-readahead-collect : failed to
open pack file: read-only file system"...
When downloading the ISO install file from the oVirt repo, I chose the
smallest one, not the "live" one. Did I do something wrong?
Anyway, now, what are my options?
--
Nicolas Ecarnot
[Users] FC on ovirt-node with brocade : port never up
by Kevin Maziere Aubry
Hi all
I have some strange behaviour between my oVirt node and my FC switch.
All the hardware is Brocade.
oVirt Node release: oVirt Node Hypervisor release 2.5.5 (0.1.fc17)
The thing is that the node doesn't detect the FC port; it detects the FC
card only.
Of 10 cloned hosts, 2 have detected the link; all the others fail.
And for those 2 hosts, I had to reboot them many times before it worked.
Because only the BFA driver is included, I can't debug anything or
update it.
Does anyone have feedback on this?
Kevin
--
Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
http://www.alterway.fr
[Users] nfs multiple data storage domains
by Jithin Raju
Hi all,
I have added 2 NFS data storage domains in my new oVirt 3.1 installation.
When my first (master) storage domain is full, it is not utilising the second; instead
it throws an error. Any idea how to make the second one be utilised?
Thanks,
Jithin
[Users] gluster volume creation error
by Jithin Raju
Hi ,
Volume creation is failing in a POSIX FS data center.
While trying to create a distribute volume, the web UI exits with the error
"creation of volume failed" and the volume is not listed in the web UI.
From the backend I can see the volume got created.
gluster volume info
Volume Name: vol1
Type: Distribute
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: x.250.76.71:/data
Brick2: x.250.76.70:/data
When I try to mount the volume manually to /mnt,
it does not give any message and
the exit status is zero.
The mount command is listed below:
fig:/vol1 on /mnt type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
When I run df, it gives me the following:
"df: `/mnt': Transport endpoint is not connected"
So i just tail'ed "/var/log/glusterfs/etc-glusterfs-glusterd.vol.log"
[2013-01-21 11:30:07.828518] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:1009)
[2013-01-21 11:30:10.839882] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:1007)
[2013-01-21 11:30:13.852374] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:1005)
[2013-01-21 11:30:16.864634] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:1003)
[2013-01-21 11:30:19.875986] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:1001)
[2013-01-21 11:30:22.886854] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:999)
[2013-01-21 11:30:25.898840] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:997)
[2013-01-21 11:30:28.910000] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:995)
[2013-01-21 11:30:31.922336] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:993)
[2013-01-21 11:30:34.934772] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:991)
[2013-01-21 11:30:37.946215] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint
is not connected), peer (135.250.76.70:989)
I just wanted to know: what am I doing wrong here?
package details:
vdsm-python-4.10.0-10.fc17.x86_64
vdsm-cli-4.10.0-10.fc17.noarch
vdsm-xmlrpc-4.10.0-10.fc17.noarch
vdsm-4.10.0-10.fc17.x86_64
vdsm-gluster-4.10.0-10.fc17.noarch
SELinux is permissive, and I have flushed iptables.
Thanks,
Jithin
[Users] lost my notes... need to import a .ova file into ovirt
by Jonathan Horne
I had successfully done this before during the testing phase, but now I cannot find my notes, and the threads I'm finding where I asked about it are not bearing fruit as they did before.
What is the syntax for importing a .ova virtual machine into oVirt 3.1?
[Users] Error run once VM
by Juan Jose
Hello everybody,
I'm following the "
http://www.ovirt.org/Quick_Start_Guide#Create_a_Fedora_Virtual_Machine" and
when I click the OK button after putting in all the parameters in "Run Virtual Machine", I
receive the error below in Events and in the vdsm.log file from my host:
Thread-352921::DEBUG::2013-01-21 15:55:40,709::task::978::TaskManager.Task::(_decref) Task=`8bb281a1-434b-4506-b4a8-2d6665bb382f`::ref 0 aborting False
Thread-352921::INFO::2013-01-21 15:55:40,709::clientIF::274::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/c77ff9d7-6280-4454-b342-faa206989d2a/bf973de9-d344-455d-a628-3dbfbf2693d9
Thread-352921::DEBUG::2013-01-21 15:55:40,717::libvirtvm::1338::vm.Vm::(_run) vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::
<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm">
  <name>Fedora17</name>
  <uuid>51738dae-c758-4e77-bad7-281f56c4d61d</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <devices>
    <channel type="unix">
      <target name="com.redhat.rhevm.vdsm" type="virtio"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/Fedora17.com.redhat.rhevm.vdsm"/>
    </channel>
    <input bus="ps2" type="mouse"/>
    <channel type="spicevmc">
      <target name="com.redhat.spice.0" type="virtio"/>
    </channel>
    <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
      <channel mode="secure" name="main"/>
      <channel mode="secure" name="inputs"/>
      <channel mode="secure" name="cursor"/>
      <channel mode="secure" name="playback"/>
      <channel mode="secure" name="record"/>
      <channel mode="secure" name="display"/>
    </graphics>
    <console type="pty">
      <target port="0" type="virtio"/>
    </console>
    <video>
      <model heads="1" type="qxl" vram="65536"/>
    </video>
    <interface type="bridge">
      <mac address="00:1a:4a:6d:ca:00"/>
      <model type="virtio"/>
      <source bridge="ovirtmgmt"/>
      <boot order="3"/>
    </interface>
    <memballoon model="virtio"/>
    <disk device="cdrom" snapshot="no" type="file">
      <source file="/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/cd9b45e6-2150-44d9-af1a-a557840fde9e/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-Live-XFCE.iso" startupPolicy="optional"/>
      <target bus="ide" dev="hdc"/>
      <readonly/>
      <serial></serial>
      <boot order="1"/>
    </disk>
    <disk device="disk" snapshot="no" type="file">
      <source file="/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/c77ff9d7-6280-4454-b342-faa206989d2a/bf973de9-d344-455d-a628-3dbfbf2693d9"/>
      <target bus="virtio" dev="vda"/>
      <serial>c77ff9d7-6280-4454-b342-faa206989d2a</serial>
      <boot order="2"/>
      <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
    </disk>
  </devices>
  <os>
    <type arch="x86_64" machine="pc-0.14">hvm</type>
    <smbios mode="sysinfo"/>
  </os>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Red Hat</entry>
      <entry name="product">RHEV Hypervisor</entry>
      <entry name="version">17-1</entry>
      <entry name="serial">36303030-3139-3236-3800-00199935CC54_00:19:99:35:cc:54</entry>
      <entry name="uuid">51738dae-c758-4e77-bad7-281f56c4d61d</entry>
    </system>
  </sysinfo>
  <clock adjustment="0" offset="variable">
    <timer name="rtc" tickpolicy="catchup"/>
  </clock>
  <features>
    <acpi/>
  </features>
  <cpu match="exact">
    <model>Conroe</model>
    <topology cores="1" sockets="1" threads="1"/>
  </cpu>
</domain>
Thread-352921::DEBUG::2013-01-21 15:55:41,258::vm::580::vm.Vm::(_startUnderlyingVm) vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::_ongoingCreations released
Thread-352921::ERROR::2013-01-21 15:55:41,259::vm::604::vm.Vm::(_startUnderlyingVm) vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error Failed to open socket to sanlock daemon: No such file or directory
Thread-352921::DEBUG::2013-01-21 15:55:41,262::vm::920::vm.Vm::(setDownStatus) vmId=`51738dae-c758-4e77-bad7-281f56c4d61d`::Changed state to Down: internal error Failed to open socket to sanlock daemon: No such file or directory
In the VMs tree, clicking on my "Fedora17" VM, the details window shows these events:
Failed to run VM Fedora17 (User: admin@internal).
Failed to run VM Fedora17 on Host host1.
VM Fedora17 is down. Exit message: internal error Failed to open socket to
sanlock daemon: No such file or directory.
In /var/log/vdsm/libvirt.log:
2013-01-21 14:55:41.258+0000: 10619: error :
virNetClientProgramDispatchError:174 : internal error Failed to open socket
to sanlock daemon: No such file or directory
If I run "systemctl status sanlock" I see the error message below:
sanlock.service - Shared Storage Lease Manager
Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled)
Active: *failed* (Result: exit-code) since Mon, 21 Jan 2013 13:17:32
+0100; 2h 57min ago
Process: 23911 ExecStop=/lib/systemd/systemd-sanlock stop (code=exited,
status=0/SUCCESS)
Process: 23898 ExecStart=/lib/systemd/systemd-sanlock start (code=exited,
status=0/SUCCESS)
Main PID: 23904 (code=exited, status=255)
CGroup: name=systemd:/system/sanlock.service
Jan 21 13:17:32 ovirt-host systemd-sanlock[23898]: Starting sanlock: [ OK
]
Jan 21 13:17:32 ovirt-host sanlock[23904]: 2013-01-21 13:17:32+0100 2854380
[23904]: sanlock daemon started 2.4 aio...70652
Jan 21 13:17:32 ovirt-host sanlock[23904]: 2013-01-21 13:17:32+0100 2854380
[23904]: wdmd connect failed for watchd...dling
Could someone give me some guidance on what the problem could be, please?
Many thanks in advance,
Juanjo.
[Users] SLOW I/O performance
by Alex Leonhardt
Hi All,
This is my current setup:
HV1 has :
storage_domain_1
is SPM master
HV2 has :
storage_domain_2
is normal (not master)
HV1 has storage_domain_1 mounted via 127.0.0.1 (network name, but hosts
entry sends it to loopback)
HV2 has storage_domain_2 mounted via 127.0.0.1 (network name, but hosts
entry sends it to loopback)
All VMs on HV1 have its storage set to storage_domain_1 and all VMs on HV2
have their storage set to storage_domain_2
My problem now is that, after I finally created all the disks on HV2 over a
super slow mgmt network (ovirtmgmt), which is only 100 Mbit, I'm now trying
to kickstart all the VMs I created. However, formatting the disk is taking
forever, roughly 20-30 minutes for 12 GB, which is about how long it took to create
the disks over the 100 Mbit link.
The weirdness really starts with HV2: while all VMs on HV1 with disks on
storage_domain_1 have "good" I/O throughput, all VMs on HV2 are awfully
slow reading from and writing to disk.
I've tried some network settings to increase throughput, but those didn't
help / had no effect at all.
Has anyone come across this issue? Is it something to do with the ovirtmgmt
interface only being 100 Mbit?
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
[Users] more useful repo to use right now?
by Gianluca Cecchi
Hello,
I see that under
http://resources.ovirt.org/releases/beta/rpm/Fedora/18
there are populated rpms.
Does this mean 3.2 beta has been released? Where can I find such announcements
in the future?
I presume if this is the case, it would be better to use them instead
of nightly at this stage, correct?
If I install beta, can I then "regularly" update to final 3.2 with the
usual "engine-update" and so on?
I'm going to test on Dell R310 (engine) and R815 (node) tomorrow, so
I'm available to test what is best for the project.
I'm going to test both on local storage and iSCSI EQL PS5000X
thanks
Gianluca
Re: [Users] mark VM to start on boot
by Roy Golan
On 01/20/2013 07:25 PM, Jim Kinney wrote:
>
> They are marked "highly available" but I thought that was for
> migration only. I saw a database boolean but other than an update
> command, I see no other way to boot on powerup.
>
Your VMs probably failed to restart on your hosts. Maybe storage isn't
connected yet; check the engine.log.
> On Jan 20, 2013 1:38 AM, "Roy Golan" <rgolan(a)redhat.com
> <mailto:rgolan@redhat.com>> wrote:
>
> On 01/18/2013 08:27 PM, Jim Kinney wrote:
>> How do I mark a VM to startup if a node fails?
>>
>> 2 hosts in cluster, windows domain controller on one host, backup
>> on second host. Both are marked high priority.
>> "Bad Things" happen and both hosts get rebooted. I want those
>> domain controllers to automatically restart.
>>
> are those VMs also marked as "Highly available" under High
> Availability tab?
>> I'm assuming the failure of the hosts did not knock down the
>> manager. (I have them on separate floors, power, etc).
>> --
>> --
>> James P. Kinney III
>> ////
>> ////Every time you stop a school, you will have to build a jail.
>> What you gain at one end you lose at the other. It's like feeding
>> a dog on his own tail. It won't fatten the dog.
>> - Speech 11/23/1900 Mark Twain
>> ////
>> http://electjimkinney.org
>> http://heretothereideas.blogspot.com/
>> ////
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org <mailto:Users@ovirt.org>
>> http://lists.ovirt.org/mailman/listinfo/users
>
[Users] Planned outage :: resources.ovirt.org/lists.ovirt.org :: 2013-01-21 01:00 UTC
by Karsten 'quaid' Wade
There will be an outage of resources.ovirt.org and lists.ovirt.org for approximately 45 minutes.
The outage will occur at 2013-01-21 01:00 UTC. To view in your local time:
date -d '2013-01-21 01:00 UTC'
I may start part of the outage 15 minutes before that. If you anticipate
needing services until the outage window, reply back to me with details
ASAP.
== Details ==
We need to resize the Linode instance to get another 15 GB for storage
until we can move services off the Linode permanently, as planned. This
resizing should give us some breathing room.
The account resize is estimated to take 30 minutes, during which time
the Linode VM will be offline. After that, there will be a few minutes
for reboot and restart of services, including the manual starting of the
IRC bot 'ovirtbot'.
The time window chosen coincides with the lowest CPU usage typically
seen on any given day - 01:00 UTC tends to be very quiet for about an
hour. Hopefully no one will even notice the downtime.
If you have any services, such as Jenkins or Gerrit backup, that may go
off during that window, you may want to retime it or be prepared for an
error.
== Affected services ==
* resources.ovirt.org
** meeting logs
** packages
* lists.ovirt.org (MailMan)
* ovirtbot
* Gerrit backup (anacron may pick this up)
* Other cronjobs (anacron may pick this up)
== Not-affected services ==
* www.ovirt.org (MediaWiki)
* jenkins.ovirt.org
* gerrit.ovirt.org
* alterway{01,02}.ovirt.org
--
Karsten 'quaid' Wade, Sr. Analyst - Community Growth
http://TheOpenSourceWay.org .^\ http://community.redhat.com
@quaid (identi.ca/twitter/IRC) \v' gpg: AD0E0C41
[Users] Attaching an existing KVM installation to oVirt
by Eric_E_Smith@DELL.com
Hello - I'm new to the list and thought I would send my first email. Is there a way to attach an existing KVM installation (non-Fedora, non-node-based installation of, say, CentOS or Ubuntu) to oVirt?
Thanks in advance,
Eric
[Users] Storage domain weirdness (or design)
by Alex Leonhardt
Hi,
I see a strange behaviour -
Setup:
1 ovirtmgmt / ovirt-engine host
2 ovirt nodes / HVs
2 storage domains in same cluster & DC
HV1 => storage domain 1 (master)
HV2 => storage domain 2
Issue:
When I create a VM with, say, a ~40 GB disk meant to be on HV2/storage_domain_2, it
does that via HV1. Why is that? I realize a storage domain is "attached
to the cluster/DC"; however, when creating the VM I explicitly selected it to
only run on HV2, so why would it still create the disk via HV1? The mgmt
network (for now) is only just that, a mgmt network, not meant for creating 40
GB disks (dd from /dev/null) over NFS; it's currently only a 100 Mbit
switch.
Question:
How, if at all, can I make oVirt create the disk from the host where it's
meant to run?
Thanks,
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
[Users] mark VM to start on boot
by Jim Kinney
How do I mark a VM to startup if a node fails?
2 hosts in cluster, windows domain controller on one host, backup on second
host. Both are marked high priority.
"Bad Things" happen and both hosts get rebooted. I want those domain
controllers to automatically restart.
I'm assuming the failure of the hosts did not knock down the manager. (I
have them on separate floors, power, etc).
--
--
James P. Kinney III
*
*Every time you stop a school, you will have to build a jail. What you gain
at one end you lose at the other. It's like feeding a dog on his own tail.
It won't fatten the dog.
- Speech 11/23/1900 Mark Twain
*
http://electjimkinney.org
http://heretothereideas.blogspot.com/
*
[Users] Move virtual hard disk images between hosts - Howto
by Adrian Gibanel
Hi,

I've written a howto on how to move (a better name would be "to copy") virtual hard disk images between hosts.
It has been useful for me in the scenario where I need to move them between different datacenters. I still don't understand the formal ways you have to move hard disks between hosts of the same/different data resource, cluster and datacenter, so I do it manually.

Anyway, I wanted you to take a look at it:
http://www.ovirt.org/User:Adrian15/Virtual_Machines_Images_Raw_Management

And tell me:
* When it would work, and when not.
* Pieces of advice.
* And if I'm missing something, like, I don't know, having to edit some values in the database.

I think that it will also be useful for some kind of restore in disaster scenarios, although I'm not sure.
I've also used some tricks to deal with sparse files without having to wait for ages.
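One common trick, sketched below for illustration only (it is not necessarily the one on the wiki page): skip zero-filled blocks while copying so that holes stay holes in the destination image.

import sys

BLOCK = 1024 * 1024
ZERO = '\0' * BLOCK

def sparse_copy(src_path, dst_path):
    # Copy a raw image, seeking over zero blocks instead of writing them.
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        while True:
            chunk = src.read(BLOCK)
            if not chunk:
                break
            if chunk == ZERO[:len(chunk)]:
                dst.seek(len(chunk), 1)   # leave a hole in the output
            else:
                dst.write(chunk)
        dst.truncate()  # fix up the final size after a trailing hole

if __name__ == '__main__':
    sparse_copy(sys.argv[1], sys.argv[2])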
--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
Re: [Users] Thin provisioning extending while "Make template" still a bug?
by Maor Lipchuk
Hi Adrian,
Sorry for the late response.
Please see inline comments,
and feel free to reach out if there are any more questions, or other issues
you want us to address.
P.S. Since some of the responses were in different threads, I gathered
all of them into this email.
> 3) About the attached logs the virtual machine made from template was
> finally made at:
> 2012/12/22 17:21
> Logs are from 16:17 to 17:21 approximately and I might reuse them to
> ask why it takes so long to make the virtual machine while checking
> the storage image at /home/storage (its size) it would seem that the
> copy has finished.
> I think that the "Make vm from template" task started about 30
> minutes or 40 minutes before it being finished. Well, that's the
> average time it takes for me.
Since you are creating a server, the default behaviour is to clone your
disks, which means the engine calls copyImage on the SPM.
From what I saw in the logs your template has two disks, one of which is
1920 GB; this is why it took 30 minutes to copy it.
If you want your copy to be faster, you can change the default in the
resource allocation tab to use thin instead of clone when you add a new
server.
Although, take into account that when you use thin provisioning, the
server will be based on the template and you will not be able to remove
the template until the VM is removed.
On 12/25/2012 09:34 PM, Adrian Gibanel wrote:
> I've described the "Create template from VM" extending bug (the one I
> was told in irc) but I've attached the "Create VM from template" logs.
>
> If there's also a bug about "Create VM from template" being extended
> then it's useful.
> If it's not I'll try to recreate a "Create template from VM" task and
> attach the logs.
>
> Sorry about the confusion.
If you can still reproduce this scenario when you create a template with
wrong provisioning, please attach the logs of it.
On 12/24/2012 01:51 AM, Adrian Gibanel wrote:
> I've just noticed that I made a typo.
> This is the right template disk allocation policy table
> (The fix is that second hard disk is preallocated instead of being
Thin Prov)
>
> Alias | Virtual Size | Allocation | Target
> First | 1920 GB | Thin Prov | local_storage
> Seco | 1 GB | Preallocat | local_storage
I took a look at your logs, and it seems that when you create a VM from a
template the arguments passed are preallocated for one image and sparse
for the other, so this fits the allocation policy table you sent.
>
> ----- Original Message -----
>
>> From: "Adrian Gibanel" <adrian.gibanel(a)btactic.com>
>> To: "users" <users(a)ovirt.org>
>> Sent: Monday, 24 December 2012 0:45:24
>> Subject: Re: [Users] Thin provisioning extending while "Make template"
>> still a bug?
>
>> 1) So... the mentioned bug does exist or am I just experiencing a
>> normal oVirt usage (Making a vm from template with thin provisioning
>> works ok without extending)?
>
>> 2) About template disk allocation policy...
>
>> When I create a new server based on template and I click on:
>> Resource Allocation tab:
>
>> Template Provisioning: Clone
>
>> Alias | Virtual Size | Allocation | Target
>> First | 1920 GB | Thin Prov | local_storage
>> Seco | 1 GB | Thin Prov | local_storage
>
Regards,
Maor
[Users] hierarchy map of ovirt environment
by Jiri Belka
Hello,
in vSphere you can have 'views' like Storage views, Network views...
Example:
http://i.techrepublic.com.com/blogs/sept-2010-virtualizationtips-tip4-fig...
This is very useful, typical scenario is when delivery manager asks
sysadmins about potential impact on VMs when a scheduled update of a
switch/storage box goes wrong.
It's easy in vSphere, just check 'views' and you will see it.
Something like this possible in oVirt?
It would be nice to have it as 'map', also as 'result' of searching,
something like...
vms: network.name = foo and network.risk = down
jbelka
[Users] Settings lost after node reboot
by Nicolas Ecarnot
Hi,
Migration failed due to a missing iptables rule for TLS:
> # libvirt tls
> -A INPUT -p tcp --dport 16514 -j ACCEPT
I added it in /etc/sysconfig/iptables and migration worked.
After a reboot, this rule is lost, as well as some setting I added in /etc.
I see that the nodes have very specific mounting strategies and/or a
read-only architecture.
I'm not sure I want to become an expert on why and how it's done, but
I'd be glad if someone could just tell me where I have to write my settings
so that they survive a node reboot.
Thank you.
--
Nicolas Ecarnot
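oVirt Node keeps most of its filesystem read-only, and only whitelisted paths
survive a reboot. A minimal sketch of persisting the change, assuming the node
image ships the persist/unpersist tools (the iptables edit itself is the one
described above):

  # after editing /etc/sysconfig/iptables on the node
  persist /etc/sysconfig/iptables      # copies the file under /config and bind-mounts it back
  ls /config/etc/sysconfig/            # files listed here survive reboots
  unpersist /etc/sysconfig/iptables    # undo later, if ever needed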
12 years, 4 months
Re: [Users] Fwd: API usage - 3.1
by Michael Pasternak
Hi Tom,
> -------- Original Message --------
> Subject: [Users] API usage - 3.1
> Date: Fri, 11 Jan 2013 16:27:03 +0000
> From: Tom Brown <tom(a)ng23.net>
> To: users <users(a)ovirt.org>
>
> Trying to get going adding VM's via the API and so far have managed to get quite far - I am however facing this
>
> vm_template = """<vm>
> <name>%s</name>
> <cluster>
> <name>Default</name>
> </cluster>
> <template>
> <name>Blank</name>
> </template>
> <vm_type>server</vm_type>
> <memory>536870912</memory>
> <os>
> <boot dev="hd"/>
> </os>
> </vm>"""
>
> The VM is created but the type ends up being a desktop and not a server -
>
> What did i do wrong?
the name of the element is <type> (not <vm_type>).
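For reference, a minimal sketch of the corrected request sent straight to the
REST API; the engine URL, credentials and VM name below are placeholders, not
values from the original mail:

  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' \
       -d '<vm>
             <name>testvm</name>
             <cluster><name>Default</name></cluster>
             <template><name>Blank</name></template>
             <type>server</type>
             <memory>536870912</memory>
             <os><boot dev="hd"/></os>
           </vm>' \
       https://engine.example.com/api/vms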
>
> thanks
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
[Users] oVirt features survey - please participate!
by Dave Neary
Hi everyone,
After the mammoth thread these past few days on what you would like to
see next from oVirt, Itamar and I have put together a list of all of the
features you requested and made a survey to help us understand a bit
more which features are more important to you, and the way in which you
use oVirt.
https://www.surveymonkey.com/s/oVirtFeatures
It will take you between 1 and 3 minutes to participate in this survey,
and help prioritise efforts for the next version or two of oVirt. If you
know of people who are oVirt users, but who are not on this mailing
list, please feel free to forward this link on to them!
Also, let me remind you that you can see first hand what is coming in
the upcoming oVirt 3.2 release and talk to the people behind oVirt
during the oVirt Workshop in NetApp HQ, Sunnyvale, California next week.
Registration is still open for another day or so, and we have about 10
places still available. Sign up now!
http://www.ovirt.org/NetApp_Workshop_January_2013
Regards,
Dave.
--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13
[Users] Fail to connect to an iSCSI Eq. LUN
by Nicolas Ecarnot
Hi,
As a beginner, I'm reading again and again, but I'm not sure of the best
way to do it:
Through the oVirt web manager, I'm trying to create an iSCSI storage domain.
On my Equalogic SAN, I've created a volume with no restriction access
(for the time being).
I have two hypervisors on which I'm quite sure my network config is good
enough for now (two nics bonded for the management, and 2 nics bonded
for the iscsi). Everything is pinging ok. Networking is not an issue.
In the ovirt web manager, I try to create the very first storage domain,
of iscsi type of course.
I choose one of the nodes, then the iSCSI discovery + login works fine.
I can see my Equalogic volume, I'm checking it, and saving with the OK
button, and I get the following error :
> "Error while executing action New SAN Storage Domain: Physical device
> initialization failed. Check that the device is empty. Please remove
> all files and partitions from the device."
Not very interesting, but the node log file is more instructive :
> Thread-2767::INFO::2013-01-16
> 13:35:57,064::logUtils::37::dispatcher::(wrapper) Run and protect:
> createVG(vgname='7becc578-a94b-41f4-bbec-8df5fe9f46c0',
> devlist=['364ed2ad5297bb022fd0ee5ba36ad91a0'], options=None)
>
> Thread-2767::DEBUG::2013-01-16
> 13:35:57,066::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo
> -n /sbin/lvm pvcreate --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%3600508e000000000ec7b6d8dea602b0e|364ed2ad5297bb022fd0ee5ba36ad91a0%\\",
> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
> --metadatasize 128m --metadatacopies 2 --metadataignore y
> /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0' (cwd None)
>
> Thread-2767::DEBUG::2013-01-16
> 13:35:57,147::__init__::1249::Storage.Misc.excCmd::(_log) FAILED: <err>
> = " Can't open /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0
> exclusively. Mounted filesystem?\n"; <rc> = 5
>
> Thread-2767::DEBUG::2013-01-16
> 13:35:57,149::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo
> -n /sbin/lvm pvs --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%3600508e000000000ec7b6d8dea602b0e|364ed2ad5297bb022fd0ee5ba36ad91a0%\\",
> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " -o
> vg_name,pv_name --noheading
> /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0' (cwd None)
>
> Thread-2767::DEBUG::2013-01-16
> 13:35:57,224::__init__::1249::Storage.Misc.excCmd::(_log) FAILED: <err>
> = ' No physical volume label read from
> /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0\n Failed to read physical
> volume "/dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0"\n'; <rc> = 5
>
> Thread-2767::ERROR::2013-01-16
> 13:35:57,226::task::853::TaskManager.Task::(_setError)
> Task=`1c5b8931-0085-489c-8962-ff5cc1323dc7`::Unexpected error
>
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/task.py", line 861, in _run
> File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
> File "/usr/share/vdsm/storage/hsm.py", line 1680, in createVG
> File "/usr/share/vdsm/storage/lvm.py", line 788, in createVG
> File "/usr/share/vdsm/storage/lvm.py", line 631, in _initpvs
> PhysDevInitializationError: Failed to initialize physical device:
> ("found: {} notFound: ('/dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0',)",)
I guess the interesting part is :
> Can't open /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0 exclusively. Mounted filesystem?
On any node, this partition is not mounted.
But I have set up access to this volume as "shared" on the Equalogic
SAN, not knowing whether I should do so. I see both nodes connected to
the same volume.
I tried to remove this permit, but it didn't help.
I also found Redhat answering such a question :
> There are two ways to clear this error:
>
> - Remove VG from the Storage LUN using vgremove command. Run this command from one of the hypervisors.
> - Clear the first 4K from the disk . Execute the following from one of the hypervisors:
>
> # dd if=/dev/zero of=/dev/mapper/diskname bs=4k count=1
I did try that, but with no luck.
Now, two things :
- Do I have to keep the access to this volume shared/allowed to all the
hypervisors dedicated to this volume?
- What is the problem with the pvcreate command?
--
Nicolas Ecarnot
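The "Can't open ... exclusively" message from pvcreate usually means something
still holds the device: a stale device-mapper partition mapped on top of the
multipath device, a leftover LVM/filesystem signature, or a mount. A small
diagnostic sketch (the WWID is the one from the log above; run it on the host
that was selected for the operation):

  multipath -ll 364ed2ad5297bb022fd0ee5ba36ad91a0
  dmsetup ls --tree                                       # look for children mapped on top of the device
  blkid /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0     # any old signature left behind?
  # if kpartx created partition mappings on top of it, drop them before retrying
  kpartx -d /dev/mapper/364ed2ad5297bb022fd0ee5ba36ad91a0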
[Users] Template restore
by Alex Leonhardt
Hi All,
Am having some difficulties with restoring templates -
basically, currently, I'm using each new HV (hypervisor/node) as a
separate DC. The first host has about 5 VM templates that I need to be
able to restore to the 2nd HV I installed today - however - after detaching
the export domain and re-attaching it (finally) to the 2nd HV, I am still
unable to import the templates into the 2nd HV (remember, separate data
center), as it complains that the "system" - the overall ovirt-engine -
already contains templates with the same ID. If I manually fake a different
ID, it complains about the name, and if I manually amend the name, it won't
show at all ... :\
so - how can I make the templates available to the 2nd DC to create VMs
from it (this is the end goal) ...
on top of it - I don't think that when importing templates, you should have
to make the name unique, especially since they currently seem DC dependent,
so in a different DC they should be able to have the same name; and, when
restoring Templates, the system should assign new IDs to the Templates
imported, and only use the originals as a verification step if the target
DC is the same as the src DC.
Any help would be very appreciated.
Ta!
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
[Users] About ISOs
by Juan Jose
Hello everybody again,
I have uploaded two ISOs to my ISO domain but I have had problems uploading
one of them. How do I delete this ISO from the datacenter and the system? Is
there some utility to do this, or some procedure?
Many thanks in advance,
Juanjo.
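There is no delete action for ISOs in the 3.1 web admin; the usual approach is
to remove the file directly from the ISO domain's image directory on the NFS
export. A sketch, assuming the default export path offered at engine-setup and
a hypothetical file name (the all-ones UUID is the fixed directory ISO domains
use for images):

  # on the machine exporting the ISO domain (often the engine itself)
  ls /var/lib/exports/iso/*/images/11111111-1111-1111-1111-111111111111/
  rm /var/lib/exports/iso/<domain-uuid>/images/11111111-1111-1111-1111-111111111111/broken.iso
  # the ISO list in the web admin refreshes by itself after a short while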
[Users] egine-iso-uploader error
by Juan Jose
Hello everybody,
I have been able to solve my NFS problems and now I have configured an ISO
resource and a data resource in the datacenter, but when I try to execute the
command "engine-iso-uploader" I see the error below:
[root@ovirt-engine ~]# engine-iso-uploader -v list
Please provide the REST API password for the admin@internal oVirt Engine
user (CTRL+D to abort):
ERROR: [ERROR]::ca_file (CA certificate) must be specified for SSL
connection.
INFO: Use the -h option to see usage.
DEBUG: Configuration:
DEBUG: command: list
DEBUG: Traceback (most recent call last):
DEBUG: File "/bin/engine-iso-uploader", line 931, in <module>
DEBUG: isoup = ISOUploader(conf)
DEBUG: File "/bin/engine-iso-uploader", line 331, in __init__
DEBUG: self.list_all_ISO_storage_domains()
DEBUG: File "/bin/engine-iso-uploader", line 381, in
list_all_ISO_storage_domains
DEBUG: if not self._initialize_api():
DEBUG: File "/bin/engine-iso-uploader", line 358, in _initialize_api
DEBUG: password=self.configuration.get("passwd"))
DEBUG: File "/usr/lib/python2.7/site-packages/ovirtsdk/api.py", line 78,
in __init__
DEBUG: debug=debug
DEBUG: File
"/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
line 47, in __init__
DEBUG: debug=debug))
DEBUG: File
"/usr/lib/python2.7/site-packages/ovirtsdk/web/connection.py", line 38, in
__init__
DEBUG: timeout=timeout)
DEBUG: File
"/usr/lib/python2.7/site-packages/ovirtsdk/web/connection.py", line 102, in
__createConnection
DEBUG: raise NoCertificatesError
DEBUG: NoCertificatesError: [ERROR]::ca_file (CA certificate) must be
specified for SSL connection.
I have one Fedora 17 oVirt engine 3.1 installed with a Fedora 17 host.
Can someone show me what the problem is and how to solve it, please?
Many thanks in advance,
Juanjo.
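The NoCertificatesError comes from the SDK connection the tool opens to the
engine: it wants the engine CA certificate for the SSL connection. A quick
check that the CA is where the tools expect it; the paths and engine host name
below assume a default 3.1 install and are not taken from the mail:

  ls -l /etc/pki/ovirt-engine/ca.pem
  curl --cacert /etc/pki/ovirt-engine/ca.pem -u 'admin@internal:PASSWORD' \
       https://ovirt-engine.example.com:443/api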
Re: [Users] node loses contact with rhev agent
by Vinzenz Feenstra
Hi Jonathan,
Could you provide the logs for it?
Thanks.
On 01/04/2013 10:32 AM, Vinzenz Feenstra wrote:
> Hi,
>
> On 01/03/2013 08:53 PM, Jonathan Horne wrote:
>> i have 2 nodes now that have both lost contact with the rhev agents
>> on guest VMs. if i migrate the guest to another node, the IP address
>> and memory usage immediately show up. migrate a guest back, and the
>> IP/memory info disappears.
>>
>> how can i troubleshoot what is causing this?
>
> It'd be nice if you could send me the logs from /var/log/vdsm/vdsm*
> which are on the node where it looses the contact. I am currently
> trying to find exactly the cause of such an issue as well and it would
> be helpful if we could be provided with log files. Currently I am
> still lacking them.
> I will check them and let you know what I will be able to find.
>
> If you don't mind, I will come back to you with more questions if any
> come up during the search for the cause, and I will let you know the
> results asap.
>>
>> thanks,
>> jonathan
>>
>
> --
> Regards,
>
> Vinzenz Feenstra | Senior Software Engineer
> RedHat Engineering Virtualization R & D
> Phone: +420 532 294 625
> IRC: vfeenstr or evilissimo
>
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
--
Regards,
Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[Users] spm keeps on shifting between nodes continously
by Jithin Raju
Hi,
I have 2 nodes with oVirt 3.1 + Gluster. When I try to activate the Data
center it changes to Up, then Contending, then back to Up continuously.
Along with the above, the SPM role keeps shifting between the two nodes
continuously.
With one node it works fine.
Somebody has reported this before, I remember, but I do not remember the fix.
engine log:
2013-01-15 15:50:41,762 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(QuartzScheduler_Worker-66) [16c01e11] START,
HSMGetAllTasksInfoVDSCommand(vdsId = 7caf739e-5ef7-11e2-aa89-525400927148),
log id: 59dae374
2013-01-15 15:50:41,791 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(QuartzScheduler_Worker-66) [16c01e11] FINISH,
HSMGetAllTasksInfoVDSCommand, return: [], log id: 59dae374
2013-01-15 15:50:41,793 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(QuartzScheduler_Worker-66) [16c01e11] FINISH,
SPMGetAllTasksInfoVDSCommand, return: [], log id: 77055e85
2013-01-15 15:50:41,795 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(QuartzScheduler_Worker-66) [16c01e11]
AsyncTaskManager::AddStoragePoolExistingTasks: Discovered no tasks on
Storage Pool DC
2013-01-15 15:50:41,796 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(QuartzScheduler_Worker-66) [16c01e11] START,
SPMGetAllTasksInfoVDSCommand(storagePoolId =
1a995d7c-5ef3-11e2-a8c4-525400927148, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 318b02c2
2013-01-15 15:50:41,798 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(QuartzScheduler_Worker-66) [16c01e11] --
SPMGetAllTasksInfoVDSCommand::ExecuteIrsBrokerCommand: Attempting on
storage pool 1a995d7c-5ef3-11e2-a8c4-525400927148
2013-01-15 15:50:41,800 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(QuartzScheduler_Worker-66) [16c01e11] START,
HSMGetAllTasksInfoVDSCommand(vdsId = 7caf739e-5ef7-11e2-aa89-525400927148),
log id: 22d29c5b
2013-01-15 15:50:41,832 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(QuartzScheduler_Worker-66) [16c01e11] FINISH,
HSMGetAllTasksInfoVDSCommand, return: [], log id: 22d29c5b
2013-01-15 15:50:41,836 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(QuartzScheduler_Worker-66) [16c01e11] FINISH,
SPMGetAllTasksInfoVDSCommand, return: [], log id: 318b02c2
2013-01-15 15:50:41,841 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(QuartzScheduler_Worker-66) [16c01e11]
AsyncTaskManager::AddStoragePoolExistingTasks: Discovered no tasks on
Storage Pool DC
2013-01-15 15:50:51,830 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.GetStoragePoolInfoVDSCommand]
(QuartzScheduler_Worker-44)
irsBroker::BuildStorageDynamicFromXmlRpcStruct::Failed building Storage
dynamic, xmlRpcStruct =
org.ovirt.engine.core.vdsbroker.xmlrpc.XmlRpcStruct@7fdd2faf
2013-01-15 15:50:51,832 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.GetStoragePoolInfoVDSCommand]
(QuartzScheduler_Worker-44)
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSErrorException:
2013-01-15 15:50:51,833 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(QuartzScheduler_Worker-44) IrsBroker::Failed::GetStoragePoolInfoVDS due
to: IRSErrorException: IRSErrorException:
2013-01-15 15:50:51,865 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand]
(QuartzScheduler_Worker-44) START, SpmStopVDSCommand(vdsId =
7caf739e-5ef7-11e2-aa89-525400927148, storagePoolId =
1a995d7c-5ef3-11e2-a8c4-525400927148), log id: 6c7ade5e
2013-01-15 15:50:51,899 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand]
(QuartzScheduler_Worker-44) SpmStopVDSCommand::Stopping SPM on vds
blueberry, pool id 1a995d7c-5ef3-11e2-a8c4-525400927148
2013-01-15 15:50:53,032 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand]
(QuartzScheduler_Worker-44) FINISH, SpmStopVDSCommand, log id: 6c7ade5e
2013-01-15 15:50:53,036 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(QuartzScheduler_Worker-44) Irs placed on server null failed. Proceed
Failover
2013-01-15 15:50:53,046 INFO
[org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
(QuartzScheduler_Worker-44) [3f11e766] Running command:
SetStoragePoolStatusCommand internal: true. Entities affected : ID:
1a995d7c-5ef3-11e2-a8c4-525400927148 Type: StoragePool
2013-01-15 15:50:53,091 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(QuartzScheduler_Worker-44) [3f11e766] hostFromVds::selectedVds - fig,
spmStatus Free, storage pool DC
2013-01-15 15:50:53,097 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(QuartzScheduler_Worker-44) [3f11e766] starting spm on vds fig, storage
pool DC, prevId -1, LVER 27
2013-01-15 15:50:53,103 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(QuartzScheduler_Worker-44) [3f11e766] START, SpmStartVDSCommand(vdsId =
d199e4dc-5ef4-11e2-a538-525400927148, storagePoolId =
1a995d7c-5ef3-11e2-a8c4-525400927148, prevId=-1, prevLVER=27,
storagePoolFormatType=V1, recoveryMode=Manual, SCSIFencing=false), log id:
64385c4b
2013-01-15 15:50:53,144 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(QuartzScheduler_Worker-44) [3f11e766] spmStart polling started: taskId =
373b6b46-d79b-45a5-a534-3a18d38ac65e
Thanks,
Jithin
[Users] windows 8 guest support
by Jithin Raju
Hi All,
Is a Windows 8 guest supported in oVirt?
Any plans, since it has some issues with QEMU SATA support?
Thanks,
Jithin
[Users] Nfs version 3 or 4 when mounting predefined engine ISO?
by Gianluca Cecchi
Hello,
what should the NFS version be in 3.2 for the default ISO domain created on
the engine? Can I change it afterwards?
During engine setup I was only asked whether I wanted it or not:
(f18 with ovirt-nightly repo and 3.2.0-1.20130113.gitc954518)
Configure NFS share on this server to be used as an ISO Domain? ['yes'|
'no'] [yes] :
Local ISO domain path [/var/lib/exports/iso] : /ISO
ok
Current situation on engine regarding iptables
[root@f18engine ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmptype 255
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate
RELATED,ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:443
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
udp dpt:111
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:111
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
udp dpt:892
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:892
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
udp dpt:875
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:875
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
udp dpt:662
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:662
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:2049
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
tcp dpt:32803
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW
udp dpt:32769
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with
icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
And regarding nfs:
[root@f18engine ~]# ps -ef|grep [n]fs
root 1134 2 0 Jan15 ? 00:00:00 [nfsd4]
root 1135 2 0 Jan15 ? 00:00:00 [nfsd4_callbacks]
root 1136 2 0 Jan15 ? 00:00:00 [nfsd]
root 1137 2 0 Jan15 ? 00:00:00 [nfsd]
root 1138 2 0 Jan15 ? 00:00:00 [nfsd]
root 1139 2 0 Jan15 ? 00:00:00 [nfsd]
root 1140 2 0 Jan15 ? 00:00:00 [nfsd]
root 1141 2 0 Jan15 ? 00:00:00 [nfsd]
root 1142 2 0 Jan15 ? 00:00:00 [nfsd]
root 1143 2 0 Jan15 ? 00:00:00 [nfsd]
[root@f18engine ~]# systemctl status rpcbind.service
rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled)
Active: active (running) since Tue, 2013-01-15 13:38:46 CET; 1 day and 2h
ago
Process: 1098 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited,
status=0/SUCCESS)
Main PID: 1128 (rpcbind)
CGroup: name=systemd:/system/rpcbind.service
└ 1128 /sbin/rpcbind -w
Jan 15 13:38:46 f18engine.ceda.polimi.it systemd[1]: Started RPC bind
service.
When the host tries to attach the ISO domain, it fails
host is f18 with ovirt-nightly and
vdsm-4.10.3-0.78.gitb005b54.fc18.x86_64
I noticed
[root@f18ovn03 ]# ps -ef|grep mount
root 1692 1 0 14:39 ? 00:00:00 /usr/sbin/rpc.mountd
root 6616 2334 0 15:17 ? 00:00:00 /usr/bin/sudo -n
/usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3
f18engine:/ISO /rhev/data-center/mnt/f18engine:_ISO
root 6617 6616 0 15:17 ? 00:00:00 /usr/bin/mount -t nfs -o
soft,nosharecache,timeo=600,retrans=6,nfsvers=3 f18engine:/ISO
/rhev/data-center/mnt/f18engine:_ISO
root 6618 6617 0 15:17 ? 00:00:00 /sbin/mount.nfs
f18engine:/ISO /rhev/data-center/mnt/f18engine:_ISO -o
rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3
root 6687 4147 0 15:17 pts/0 00:00:00 grep --color=auto mount
The problem here is option
nfsvers=3
In fact, if I manually run on the node:
[root@f18ovn03 ]# mount -t nfs -o nfsvers=4 f18engine:/ISO /p
--> OK
and
[root@f18ovn03 ]# mount
...
f18engine:/ISO on /p type nfs4
(rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.4.4.59,local_lock=none,addr=10.4.4.60)
while
# mount -t nfs -o nfsvers=3 f18engine:/ISO /p
--> KO
stalled
What should I change, engine or host or both?
Thanks in advance,
Gianluca
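An nfsvers=3 mount needs more than port 2049: it also relies on the mountd,
statd and lockd services being registered with rpcbind on the server, which
NFSv4 does not need. A small check from the host (host names as in the mail;
in 3.2 the domain's NFS version can also be picked when the domain is created,
under the advanced parameters):

  rpcinfo -p f18engine        # should list mountd, nlockmgr and status besides nfs
  showmount -e f18engine      # does mountd answer and export /ISO?
  mount -v -t nfs -o nfsvers=3 f18engine:/ISO /mnt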
[Users] Video from NetApp event?
by Peter Styk
Are there any plans to shoot some video at the NetApp event? Would be
really cool to embed some youtubes on the wiki later on.
Best
polfilm
[Users] .img file for win2008 controller driver?
by Jonathan Horne
Windows 2008 R2 install hits a point where it cannot find the disks, and asks
for a driver. I assume I need a .img file to mount to a virtual floppy. I
cannot seem to locate this, can someone point me in the right direction? Is
this supposed to be included with the base ovirt install?
Thanks,
jonathan
________________________________
This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery. NOTE: Regardless of content, this e-mail shall not operate to bind
SKOPOS to any order or other contract unless pursuant to explicit written
agreement or government initiative expressly permitting the use of e-mail for
such purpose.
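The drivers in question are the virtio-win block drivers; they are not part of
the base oVirt install. A sketch of one way to get them onto the ISO domain,
assuming the Fedora virtio-win package and an ISO domain named ISO_DOMAIN
(package name, file names and the domain name may differ on your setup):

  yum install -y virtio-win
  ls /usr/share/virtio-win/        # contains virtio-win*.iso and virtio-win*.vfd floppy images
  engine-iso-uploader --iso-domain=ISO_DOMAIN upload /usr/share/virtio-win/virtio-win_x86.vfd
  # then attach the .vfd as a floppy via Run Once and point the Windows installer at it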
[Users] oVirt Weekly Meeting Minutes
by Mike Burns
Minutes: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-16-15.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-16-15.00.txt
Log: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-16-15.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by mburns at 15:00:23 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-16-15.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 15:00:29)
* release status (mburns, 15:04:28)
* we're supposed to be posting the beta versions of packages (mburns,
15:04:52)
* (actually were supposed to do this yesterday) (mburns, 15:05:03)
* engine builds running now, to be posted when complete (mburns,
15:06:30)
* vdsm build is available in fedora updates testing, but not core
fedora yet (mburns, 15:06:48)
* vdsm packages downloaded from koji and loaded into beta area on
ovirt.org (mburns, 15:07:10)
* otopi and ovirt-host-deploy built and uploaded (mburns, 15:07:19)
* ovirt-node and ovirt-node iso are coming shortly, but needed vdsm
and ovirt-host-deploy uploaded first (mburns, 15:07:47)
* build run, now in smoketesting prior to posting (mburns, 15:07:58)
* mom from fedora is good (mburns, 15:10:26)
* still waiting on cli/sdk (mburns, 15:11:35)
* still waiting on log-collector image-uploader iso-uploader
guest-agent (mburns, 15:12:06)
* proposal: leave alpha packages in beta for missing packages and get
updated packages in background (mburns, 15:20:55)
* AGREED: will move into beta with alpha packages for guest-agent,
log-collector, iso and image uploaders, sdk/cli (mburns, 15:22:39)
* steps left to start beta: (mburns, 15:22:56)
* 1. upload new ovirt-engine rpms (mburns, 15:23:09)
* 2. finalize smoketesting, build, and post ovirt-node and
ovirt-node-iso rpms (mburns, 15:23:35)
* 3. send announcement (mburns, 15:23:45)
* ACTION: oschreib_ to follow up with mpastern and kroberts for beta
branching and packaging (mburns, 15:25:43)
* ACTION: mburns to finish ovirt-node related beta tasks (mburns,
15:25:54)
* ACTION: mburns to send out beta announcement (mburns, 15:26:01)
* one other request for maintainers -- please keep the tracker bug
updated with release blocking issues (mburns, 15:26:33)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=881006 (mburns,
15:26:48)
* Test Day scheduled for 24-Jan (Next Thursday) (mburns, 15:28:25)
* workshops (mburns, 15:29:33)
* no updates on Shanghai workshop, updates coming next week (mburns,
15:30:52)
* registration for Sunnyvale (NetApp) is open until tomorrow (mburns,
15:31:23)
* Board Meeting is arranged, good attendance expected (mburns,
15:33:06)
* for remote participation at the board meeting, please contact dneary
(mburns, 15:33:21)
* not really a workshop, but somewhat related -- I'll be attending
FUDCon this weekend and will propose both a talk on oVirt and will
work with people who want to deploy oVirt (mburns, 15:34:50)
* please feel free to say hi if you're also attending (mburns,
15:35:21)
* LINK:
http://wiki.ovirt.org/OVirt_Global_Workshops#oVirt_Workshop_at_Intel_Campus
(lh, 15:40:02)
* ACTION: lh to confirm with Intel we are "go" for Shanghai workshop
on 20 - 21 March (quaid, 15:41:08)
* Shanghai dates are confirmed for 20-21 March (mburns, 15:41:12)
* getting reconfirmation from Intel that 20-21 March dates are locked
(lh, 15:41:31)
* Infrastructure report (mburns, 15:42:10)
* AlterWay Servers are available (mburns, 15:42:20)
* dneary has connection information (mburns, 15:42:27)
* New hosts from RackSpace due in a few weeks (quaid, 15:46:15)
* I will be on vacation and travelling in Europe starting 25 Jan
through 08 Feb (with a work/FOSDEM break on 31 - 02), so we'll be
discussing coverage on infra@ (quaid, 15:47:10)
* Jenkins will be moving to Alter Way, migration plan forthcoming with
details to discuss before finalized on arch@ (quaid, 15:47:40)
* Other topics (mburns, 15:49:13)
* due to travel and the oVirt Workshop, mburns likely won't be around
for the weekly next week (mburns, 15:49:55)
* mburns will try to get someone else to handle this meeting next week
(mburns, 15:52:08)
Meeting ended at 15:56:02 UTC.
Action Items
------------
* oschreib_ to follow up with mpastern and kroberts for beta branching
and packaging
* mburns to finish ovirt-node related beta tasks
* mburns to send out beta announcement
* lh to confirm with Intel we are "go" for Shanghai workshop on 20 - 21
March
Action Items, by person
-----------------------
* lh
* lh to confirm with Intel we are "go" for Shanghai workshop on 20 -
21 March
* mburns
* mburns to finish ovirt-node related beta tasks
* mburns to send out beta announcement
* oschreib_
* oschreib_ to follow up with mpastern and kroberts for beta branching
and packaging
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (91)
* aglitke (19)
* dneary (17)
* oschreib_ (12)
* lh (9)
* quaid (9)
* ovirtbot (5)
* goacid (1)
* ofrenkel (1)
* Rydekull (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[Users] ovirt-engine-sdk-java 1.0.0.1 released
by Michael Pasternak
Hi all,
I'm happy to announce first official release of ovirt-engine-sdk-java, change log
between sdk announcement and 1.0.0.1 is:
* Tue Jan 15 2013 Michael Pasternak <mpastern(a)redhat.com> - 1.0.0.1-1
- implement parametrized list() methods
- events can be added now (user defined events)
- events can be removed now
- host can be added now by using cluster.name (not only cluster-id)
- NIC now has "linked" property
- NIC now has "plugged" property
- VM has now ReportedDevices sub-collection
- VMNIC has now ReportedDevices sub-collection
- to host add/update added power_management.agents parameter
- to disk added permissions sub-collection
- to PowerManagement added Agents collection
- to VMDisk added move() action
- to host added hooks sub-collection
- to cluster added threads_as_cores property
- to host added hardwareInformation property
- to host added OS property
- added force flag to the host.delete() method
- added host.power_management.pm_proxy sub-collection
- added permissions sub-collection to the network
- added search capabilities to api.networks collection
- added deletion protection support to template/vm via .delete_protected property
More details can be found at [1].
[1] http://www.ovirt.org/Java-sdk-changelog
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
[Users] Unable to add FCP storage domain
by Gianluca Cecchi
Hello,
f18 server with oVirt engine
ovirt-engine-3.2.0-1.20130113.gitc954518.fc18.noarch
and f18 host
with
vdsm-4.10.3-0.78.gitb005b54.fc18.x86_64
DC is configred wth FCP as default.
Trying to add a LUN I get
Error while executing action New SAN Storage Domain: Error creating a
storage domain
I notice that it creates a PV and a VG
pvs:
/dev/mapper/3600507630efe05800000000000001601
c6bb44ee-b824-44a0-a62c-f537a23d2e2b lvm2 a-- 99.62g 99.62g
vgs:
VG #PV #LV #SN Attr VSize VFree
c6bb44ee-b824-44a0-a62c-f537a23d2e2b 1 0 0 wz--n- 99.62g 99.62g
vdsm.log
Thread-26641::DEBUG::2013-01-16
00:34:15,073::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload
operation' released the operation mutex
Thread-26641::WARNING::2013-01-16
00:34:15,073::lvm::73::Storage.LVM::(__getattr__)
/dev/mapper/3600507630efe05800000000000001601 can't be reloaded, please
check your storage connections.
Thread-26641::ERROR::2013-01-16
00:34:15,073::task::833::TaskManager.Task::(_setError)
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 840, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 2424, in createStorageDomain
domVersion)
File "/usr/share/vdsm/storage/blockSD.py", line 505, in create
numOfPVs = len(lvm.listPVNames(vgName))
File "/usr/share/vdsm/storage/lvm.py", line 1257, in listPVNames
return [pv.name for pv in pvs if pv.vg_name == vgName]
File "/usr/share/vdsm/storage/lvm.py", line 74, in __getattr__
raise AttributeError("Failed reload: %s" % self.name)
AttributeError: Failed reload: /dev/mapper/3600507630efe05800000000000001601
Thread-26641::DEBUG::2013-01-16
00:34:15,074::task::852::TaskManager.Task::(_run)
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::Task._run:
1dea04e7-56e1-49c3-a702-efa676ef1e7e (2,
'c6bb44ee-b824-44a0-a62c-f537a23d2e2b',
'3600507630efe05800000000000001601',
'x3XSZx-avUC-0NNI-w5K4-nCOp-uxp5-TD9GvH', 1, '3') {} failed - stopping task
Thread-26641::DEBUG::2013-01-16
00:34:15,074::task::1177::TaskManager.Task::(stop)
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::stopping in state preparing
(force False)
Thread-26641::DEBUG::2013-01-16
00:34:15,074::task::957::TaskManager.Task::(_decref)
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::ref 1 aborting True
In messages:
Jan 16 00:34:14 f18ovn03 vdsm Storage.LVM WARNING lvm vgs failed: 5 ['
x3XSZx-avUC-0NNI-w5K4-nCOp-uxp5-TD9GvH|c6bb44ee-b824-44a0-a62c-f537a23d2e2b|wz--n-|106971529216|106971529216|134217728|797|797|RHAT_storage_domain_UNREADY|134217728|67107328']
[' Skipping clustered volume group VG_VIRT04', ' Skipping clustered
volume group VG_VIRT02', ' Skipping clustered volume group VG_VIRT03', '
Skipping clustered volume group VG_VIRT01']
Jan 16 00:34:15 f18ovn03 vdsm Storage.LVM WARNING lvm pvs failed: 5 ['
NQRb0Q-3C0k-3RRo-1LZZ-NNy2-42A1-c7zO8e|/dev/mapper/3600507630efe05800000000000001601|106971529216|c6bb44ee-b824-44a0-a62c-f537a23d2e2b|x3XSZx-avUC-0NNI-w5K4-nCOp-uxp5-TD9GvH|135266304|797|0|2|107374182400']
[' Skipping clustered volume group VG_VIRT04', ' Skipping volume group
VG_VIRT04', ' Skipping clustered volume group VG_VIRT02', ' Skipping
volume group VG_VIRT02', ' Skipping clustered volume group VG_VIRT03', '
Skipping volume group VG_VIRT03', ' Skipping clustered volume group
VG_VIRT03', ' Skipping volume group VG_VIRT03', ' Skipping clustered
volume group VG_VIRT01', ' Skipping volume group VG_VIRT01', ' Skipping
clustered volume group VG_VIRT01', ' Skipping volume group VG_VIRT01']
Jan 16 00:34:15 f18ovn03 vdsm Storage.LVM WARNING
/dev/mapper/3600507630efe05800000000000001601 can't be reloaded, please
check your storage connections.
Jan 16 00:34:15 f18ovn03 vdsm TaskManager.Task ERROR
Task=`1dea04e7-56e1-49c3-a702-efa676ef1e7e`::Unexpected error
Jan 16 00:34:15 f18ovn03 vdsm Storage.Dispatcher.Protect ERROR Failed
reload: /dev/mapper/3600507630efe05800000000000001601
Can it be that the clustered VGs on the other LUNs that are being skipped are
the cause?
BTW: tomorrow I should have a SAN guy able to mask them ...
Gianluca
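Whatever the root cause turns out to be, the half-created PV/VG noted above
will block a retry; a minimal cleanup sketch using the names from the pvs/vgs
output (standard LVM commands, run on the host that attempted the creation):

  vgremove c6bb44ee-b824-44a0-a62c-f537a23d2e2b
  pvremove /dev/mapper/3600507630efe05800000000000001601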
[Users] BSOD's on Server 2008 Guests
by Neil
Hi guys,
I have 3 Server 2008 guests running on my oVirt 3.1 system. If there
is an incorrect shutdown (power failure) of the entire system and
guests, or even sometimes a normal reboot of the guests, the 2008
Servers all start with blue screens and I have to reboot them multiple
times before they eventually boot into Windows as if nothing was ever
wrong. The Linux guests all behave perfectly on the system so I highly
doubt there are any hardware issues.
Does this problem sound familiar to anyone? I don't want to go ahead
and run all the latest updates and possibly risk bigger issues, unless
there is a good reason to.
These are the details of my system.
Centos 6.3 64bit on nodes and engine using dreyou repo.
ovirt-engine-3.1.0-3.8.el6.noarch
vdsm-4.10.0-0.44.14.el6.x86_64
qemu-kvm-0.12.1.2-2.295.el6_3.2.x86_64
libvirt-0.9.10-21.el6_3.5.x86_64
2x Dell R720 with Xeon E5-2620 CPU's Nodes running the guests.
An FC SAN for storage
HP Micro Server for the engine
Please shout if any other details will help.
Thanks.
Regards.
Neil Wilson.
[Users] ovirt node installation
by Mikael Bergemalm
Hi,
I'm trying to install an ovirt node using the
"ovirt-node-iso-2.5.5-0.1.fc17.iso" on a HP proliant DL360 G6 but when I
get to the graphical install page the server seems frozen and I can't
choose any options. Is there an all text-based installation I can use in
some way?
Regards,
Mike
[Users] UI Plugins in nightly?
by René Koch (ovido)
Hi,
I installed oVirt engine on Fedora 18 with latest nightly RPMs and
wanted to test the UI plugins, but it seems as if this feature is still not
available in the nightlies.
According to Oved blog post
(http://ovedou.blogspot.co.at/2012/12/ovirt-foreman-ui-plugin.html)
custom plugins should be put into: /usr/share/ovirt-engine/ui-plugins
UI plugins page on oVirt webpage
(http://www.ovirt.org/Features/UIPlugins)
proposed /usr/libexec/ovirt/webadmin/extensions as the folder for UI
plugins.
But neither of these folder does exist in my setup.
So I wanted to know if UI plugins are still not packaged and if they
will be included in final oVirt 3.2?
Thanks a lot.
--
Regards,
René Koch
Senior Solution Architect
============================================
ovido gmbh - "Das Linux Systemhaus"
Brünner Straße 163, A-1210 Wien
Phone: +43 720 / 530 670
Mobile: +43 660 / 512 21 31
E-Mail: r.koch(a)ovido.at
============================================
[Users] Import VMs from abandoned Storage Domain
by Alex Leonhardt
Hi,
Am trying to import VMs from an abandoned storage pool -
all I'm getting is this :
2013-01-15 20:21:05,958 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(ajp--0.0.0.0-8009-16) START, GetVmsInfoVDSCommand(storagePoolId =
38a9ac9d-fe31-4003-8111-3ac741470b6e, ignoreFailoverLimit = false,
compatabilityVersion = null, storageDomainId =
b9c2cf06-73ea-4dd4-900b-3af322ab223d, vmIdList = null), log id: 65b1c9a8
2013-01-15 20:21:06,013 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(ajp--0.0.0.0-8009-16) FINISH, GetVmsInfoVDSCommand, log id: 65b1c9a8
2013-01-15 20:21:06,036 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(ajp--0.0.0.0-8009-11) START, GetVmsInfoVDSCommand(storagePoolId =
38a9ac9d-fe31-4003-8111-3ac741470b6e, ignoreFailoverLimit = false,
compatabilityVersion = null, storageDomainId =
b9c2cf06-73ea-4dd4-900b-3af322ab223d, vmIdList = null), log id: 7b0bc172
2013-01-15 20:21:06,085 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(ajp--0.0.0.0-8009-11) FINISH, GetVmsInfoVDSCommand, log id: 7b0bc172
2013-01-15 20:21:06,928 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageDomainsListVDSCommand]
(ajp--0.0.0.0-8009-11) START, GetImageDomainsListVDSCommand(storagePoolId =
38a9ac9d-fe31-4003-8111-3ac741470b6e, ignoreFailoverLimit = false,
compatabilityVersion = null, imageGroupId =
8d41be6c-a586-4bb6-be4b-f1241a4bf088), log id: 79952c27
2013-01-15 20:21:06,951 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageDomainsListVDSCommand]
(ajp--0.0.0.0-8009-11) FINISH, GetImageDomainsListVDSCommand, return: [],
log id: 79952c27
2013-01-15 20:21:06,954 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
(ajp--0.0.0.0-8009-11) START, DoesImageExistVDSCommand(storagePoolId =
38a9ac9d-fe31-4003-8111-3ac741470b6e, ignoreFailoverLimit = false,
compatabilityVersion = null, storageDomainId =
b9c2cf06-73ea-4dd4-900b-3af322ab223d, imageGroupId =
8d41be6c-a586-4bb6-be4b-f1241a4bf088, imageId =
a4782145-626c-4a6e-9e1e-fce5f1dd8f78), log id: 3085f2cd
2013-01-15 20:21:06,993 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
(ajp--0.0.0.0-8009-11) FINISH, DoesImageExistVDSCommand, return: true, log
id: 3085f2cd
2013-01-15 20:21:07,039 INFO [org.ovirt.engine.core.bll.ImportVmCommand]
(pool-3-thread-48) [8991b34] Running command: ImportVmCommand internal:
false. Entities affected : ID: b756284f-06f9-44cd-ba45-6cac3486fe37 Type:
Storage
2013-01-15 20:21:07,040 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-3-thread-48) [8991b34] Try to add duplicate values with same name.
Type: UNASSIGNED. Value: vmname
2013-01-15 20:21:07,045 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-3-thread-48) [8991b34] Try to add duplicate values with same name.
Type: UNASSIGNED. Value: vmname
2013-01-15 20:21:07,049 INFO
[org.ovirt.engine.core.utils.transaction.TransactionSupport]
(pool-3-thread-48) [8991b34] transaction rolled back
2013-01-15 20:21:07,050 ERROR [org.ovirt.engine.core.bll.ImportVmCommand]
(pool-3-thread-48) [8991b34] Command
org.ovirt.engine.core.bll.ImportVmCommand throw exception:
java.lang.StringIndexOutOfBoundsException: String index out of range: 6
at
java.lang.AbstractStringBuilder.deleteCharAt(AbstractStringBuilder.java:766)
[rt.jar:1.6.0_24]
at java.lang.StringBuilder.deleteCharAt(StringBuilder.java:280)
[rt.jar:1.6.0_24]
at
org.ovirt.engine.core.bll.ImportVmCommand.auditInvalidInterfaces(ImportVmCommand.java:933)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.ImportVmCommand.AddVmNetwork(ImportVmCommand.java:801)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.ImportVmCommand$3.runInTransaction(ImportVmCommand.java:488)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.ImportVmCommand$3.runInTransaction(ImportVmCommand.java:482)
[engine-bll.jar:]
at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:204)
[engine-utils.jar:]
at
org.ovirt.engine.core.bll.ImportVmCommand.addVmToDb(ImportVmCommand.java:482)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.ImportVmCommand.executeCommand(ImportVmCommand.java:476)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.ExecuteWithoutTransaction(CommandBase.java:804)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:896)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1203)
[engine-bll.jar:]
at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:168)
[engine-utils.jar:]
at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:107)
[engine-utils.jar:]
at org.ovirt.engine.core.bll.CommandBase.Execute(CommandBase.java:911)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.ExecuteAction(CommandBase.java:268)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.MultipleActionsRunner.executeValidatedCommands(MultipleActionsRunner.java:182)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.MultipleActionsRunner.RunCommands(MultipleActionsRunner.java:162)
[engine-bll.jar:]
at
org.ovirt.engine.core.bll.MultipleActionsRunner$1.run(MultipleActionsRunner.java:84)
[engine-bll.jar:]
at
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:64)
[engine-utils.jar:]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
[rt.jar:1.6.0_24]
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
[rt.jar:1.6.0_24]
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
[rt.jar:1.6.0_24]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
[rt.jar:1.6.0_24]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
[rt.jar:1.6.0_24]
at java.lang.Thread.run(Thread.java:679) [rt.jar:1.6.0_24]
2013-01-15 20:21:07,232 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(ajp--0.0.0.0-8009-14) START, GetVmsInfoVDSCommand(storagePoolId =
38a9ac9d-fe31-4003-8111-3ac741470b6e, ignoreFailoverLimit = false,
compatabilityVersion = null, storageDomainId =
b9c2cf06-73ea-4dd4-900b-3af322ab223d, vmIdList = null), log id: 44689362
2013-01-15 20:21:07,284 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand]
(ajp--0.0.0.0-8009-14) FINISH, GetVmsInfoVDSCommand, log id: 44689362
The above happened only because I was somehow able to convince it that it was
an export domain ...
The question really is - how can I re-attach the storage domain? Ideally
without having to re-import the VMs ... although I'd accept that as a
workaround - however, re-creating is out of the question, it'll take me
3 days (and my job) ...
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
[Users] web admin portal not reachable after reboot
by Gianluca Cecchi
Hello,
tried to simulate some maintenance operations and restart the f18 server
where I installed the engine, version
3.2.0-1.20130113.gitc954518
I'm unable to connect to it after
shutdown -r now
The engine seems started correctly
Even after
systemctl restart ovirt-engine.service
I'm not able to connect via web
I can see this in httpd logs:
[Tue Jan 15 13:38:08.512923 2013] [mpm_prefork:notice] [pid 1132] AH00170:
caught SIGWINCH, shutting down gracefully
[Tue Jan 15 13:38:50.950219 2013] [core:notice] [pid 1097] SELinux policy
enabled; httpd running as context system_u:system_r:httpd_t:s0
[Tue Jan 15 13:38:51.014967 2013] [suexec:notice] [pid 1097] AH01232:
suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Jan 15 13:38:52.000997 2013] [ssl:notice] [pid 1097] AH01886: SSL FIPS
mode disabled
[Tue Jan 15 13:38:52.146753 2013] [auth_digest:notice] [pid 1097] AH01757:
generating secret for digest authentication ...
[Tue Jan 15 13:38:53.000950 2013] [lbmethod_heartbeat:notice] [pid 1097]
AH02282: No slotmem from mod_heartmonitor
[Tue Jan 15 13:38:53.001039 2013] [ssl:notice] [pid 1097] AH01886: SSL FIPS
mode disabled
[Tue Jan 15 13:38:53.012520 2013] [mpm_prefork:notice] [pid 1097] AH00163:
Apache/2.4.3 (Fedora) OpenSSL/1.0.1c-fips configured -- resuming normal
operations
[Tue Jan 15 13:38:53.012539 2013] [core:notice] [pid 1097] AH00094: Command
line: '/usr/sbin/httpd -D FOREGROUND'
# getenforce
Permissive
# systemctl status httpd.service
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
Active: active (running) since Tue, 2013-01-15 13:38:53 CET; 9min ago
Main PID: 1097 (httpd)
Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0
B/sec"
CGroup: name=systemd:/system/httpd.service
├ 1097 /usr/sbin/httpd -DFOREGROUND
├ 1267 /usr/sbin/httpd -DFOREGROUND
├ 1268 /usr/sbin/httpd -DFOREGROUND
├ 1269 /usr/sbin/httpd -DFOREGROUND
├ 1270 /usr/sbin/httpd -DFOREGROUND
└ 1271 /usr/sbin/httpd -DFOREGROUND
Jan 15 13:38:53 f18engine.ceda.polimi.it systemd[1]: Started The Apache
HTTP Server.
# systemctl status ovirt-engine.service
ovirt-engine.service - oVirt Engine
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled)
Active: active (running) since Tue, 2013-01-15 13:38:53 CET; 9min ago
Process: 1233 ExecStart=/usr/bin/engine-service start (code=exited,
status=0/SUCCESS)
Main PID: 1274 (java)
CGroup: name=systemd:/system/ovirt-engine.service
└ 1274 engine-service -server -XX:+TieredCompilation -Xms1g -Xmx1g
-XX:PermSize=256m -XX:MaxPermSize=256m -D...
Jan 15 13:38:51 f18engine systemd[1]: Starting oVirt Engine...
Jan 15 13:38:53 f18engine engine-service[1233]: Started engine process 1274.
Jan 15 13:38:53 f18engine engine-service[1233]: Starting engine-service: [
OK ]
Jan 15 13:38:53 f18engine systemd[1]: Started oVirt Engine.
In the browser, after trying to connect, I get
"server is taking too long to answer..."
Before the shutdown it was OK.
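A quick, oVirt-agnostic way to check which listeners actually answer after the reboot; the host name and the port list below are assumptions to adjust for your setup:

import socket

host = 'f18engine.example.com'         # placeholder for the engine host
for port in (80, 443, 8080, 8443):     # assumed front-end ports, adjust as needed
    s = socket.socket()
    s.settimeout(3)
    try:
        s.connect((host, port))
        print('port %d answers' % port)
    except (socket.error, socket.timeout) as e:
        print('port %d not reachable: %s' % (port, e))
    finally:
        s.close()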
12 years, 4 months
[Users] VMWare vSphere resource pools access control like feature?
by Jiri Belka
Hi,
in vSphere you can create a resource pool [1] and define access control
and delegation for it...
<quote>
Access control and delegation - When a top-level administrator makes a resource
pool available to a department-level administrator, that administrator can then
perform all virtual machine creation and management within the boundaries of the
resources to which the resource pool is entitled by the current shares,
reservation, and limit settings. Delegation is usually done in conjunction with
permissions setting
</quote>
Is it possible in oVirt? The use case here is to assign roles to a resource pool.
You could do this with different DCs/clusters in oVirt, but it looks like it is
impossible when you have _just_ one host (and thus one DC/cluster).
jirib
[1] http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.r...
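As far as I know there is no direct resource-pool equivalent in oVirt; the closest is cluster/DC-level permissions (plus quota). A rough sketch of delegating a cluster to a user with the Python SDK, assuming the permissions sub-collection exposed by ovirt-engine-sdk-python; the engine URL, user name and role name are placeholders, not values from this thread:

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='password', insecure=True)

cluster = api.clusters.get(name='Default')
user = api.users.get(name='deptadmin@internal')    # placeholder user
cluster.permissions.add(params.Permission(user=params.User(id=user.get_id()),
                                          role=params.Role(name='PowerUserRole')))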
12 years, 4 months
Re: [Users] oVirt Node (HyperVisor) - Memory Usage
by Doron Fediuck
-----Original Message-----
From: Alex Leonhardt [alex.tuxx(a)gmail.com]
Received: Friday, 11 Jan 2013, 11:28
To: oVirt Mailing List [users(a)ovirt.org]
Subject: [Users] oVirt Node (HyperVisor) - Memory Usage
Hi All,
I've just had a little check on a hyper-visor (based on Centos 6.3)
VDSM versions:
vdsm.x86_64 4.10.0-0.44.14.el6
vdsm-cli.noarch 4.10.0-0.44.14.el6
vdsm-python.x86_64 4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch 4.10.0-0.44.14.el6
BUT - my concern is more that a VM's virtual memory (VSZ) allocation is much
higher than its configured memory:
qemu 24233 11.0 1.0 *3030420* 1008484 ? Sl 2012 2189:02
/usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm *-m
2048*-smp 4,sockets=1,cores=4,threads=1 -name
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
Alex,
What usage do you see and what did you configure?
12 years, 4 months
Re: [Users] Network reconfiguration in ovirt 3.1
by Itamar Heim
On 12/27/2012 03:48 PM, Dan Kenigsberg wrote:
...
> On the other hand, we need to define the management network when a host
> is added to the setup, and nagging for confirmation for each of them may
> lead us to the blind baboon acking syndrome.
>
> Since I'd like to normalize the way ovirtmgmt is created, and make it
> more like other networks, I've written up
> http://www.ovirt.org/Features/Normalized_ovirtmgmt_Initialization
>
> If the Add Host dialog had a checkbox saying "define ovirtmgmt network
> automatically", would it satisfy you? (any other comment to that feature
> page is welcome).
1. in general, I like it. iiuc, you don't really need the management
network for normal work (engine would connect to host by its fqdn/ip by
default). if you do want to set a vlan/bond/SLA/roles/etc - then you
just configure it.
2. need to see how to preserve backward compatibility - i.e., bootstrap
will probably need to handle this for older engines.
3. iirc, in ovirt-node the bridge is created prior to bootstrap, which
wouldn't be needed any more.
Itamar
12 years, 4 months
[Users] iso domain creation error
by Jithin Raju
Hi,
I ran engine-setup for a fresh oVirt 3.1 installation; while configuring the ISO
domain I get this error:
"Should the installer configure NFS share on this server to be used as an
ISO Domain? ['yes'| 'no'] [yes] : yes
Local ISO domain path: /iso
Error: directory /iso is not empty"
my /iso is a separate ext4 filesystem, which has a "lost+found" directory.
I had to remove the "lost+found" directory to get the ISO domain created by
engine-setup.
Thanks,
Jithin
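A trivial check of what engine-setup is objecting to; my reading of the error above is simply that the target directory must be empty ('/iso' mirrors the path in the question):

import os

entries = os.listdir('/iso')   # same path as in the question
print(entries)                 # anything listed here (including lost+found)
                               # trips the "directory /iso is not empty" check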
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Sigbjorn Lie
On 01/03/2013 05:08 PM, Itamar Heim wrote:
> Hi Everyone,
>
> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
> find good/useful in oVirt, and what they would like to see
> improved/added in coming versions?
>
> Thanks,
> Itamar
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
I would also like to see single sign on capabilities (Kerberos) on the
WebAdmin and UserPortal when using IPA as authentication source.
Regards,
Siggi
12 years, 4 months
[Users] libvirt dependency vs. a way to have Solaris hosts in ovirt-engine
by Jiri Belka
Hi,
as you know, Solaris (OpenIndiana) has qemu-kvm although it doesn't use
libvirt (in fact libvirt is tied too much to Linux specifics).
If vdsm could interact with qemu-kvm without libvirt, it would open a
way to have Solaris hosts in ovirt-engine.
libvirt is another abstraction layer below vdsm; if vdsm could use
"plugins" to interact with qemu-kvm (libvirt, "native", solaris-style),
it could keep the current mode, or bypass libvirt and use Solaris tools to talk
to their qemu-kvm.
jbelka
12 years, 4 months
[Users] centos issues..
by peter houseman
Hi,
I am currently trying to get ovirt engine and nodes up and running on
Centos 6u3. Unfortunately my lab does not have direct internet access so
the ovirt repo has been copied over from the
people.centos.org/hughesjr/ovirt31 repo as recommended by ovirt howto on
the Centos Wiki.
Everything installs fine with no dependency errors but as soon as I create
storage domains in the engine, warning messages appear in the vdsm.logs on
the hosts for the ISO and Data domains:
(retyped below)
"Warning... 390::3d::363::
Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace
72xxxxxx_volumeNS already registered"
plus similar warning message as above but ...."imageNS already registered"
plus
"Warning Storage.LVM::(reloadvgs) lvm vgs failed:5 Volume group "5bxxxx"
not found"
I have tried rebuilding the whole system and using NFS, ISCSI and Gluster
data domains but I still have the same warning messages.
Also, and I'm not sure if it's related, I have noticed info messages in
the ovirt engine log saying:
"Autorecovering Storage domains is disabled, skipping"
Even though I am getting storage warning messages, the VMs are up and I can
log into them and run applications.
Any help appreciated.
Pete
12 years, 4 months
[Users] hypervisor install fails to detect proper CPU type
by Jim Kinney
I installed the F17 version of the hypervisor and was unable to join the
new node to the cluster. The failure was "wrong CPU type for cluster". The
system has an Intel Xeon x5660 CPU (Westmere family). There is another
system of the same class in the same cluster NOT using the hypervisor
(using a CentOS 6.3 install with dre-repo).
I reinstalled the failing system with CentOS and all is well now joining
the system to the cluster.
--
--
James P. Kinney III
*
*Every time you stop a school, you will have to build a jail. What you gain
at one end you lose at the other. It's like feeding a dog on his own tail.
It won't fatten the dog.
- Speech 11/23/1900 Mark Twain
*
http://electjimkinney.org
http://heretothereideas.blogspot.com/
*
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Itamar Heim
On 01/03/2013 11:26 PM, Alexandru Vladulescu wrote:
>
> I would like to add a request for the new upper coming version 3.2 if
> possible:
Hi Alexandru,
just to note my question was for the post 3.2 version, as 3.2 is
basically done.
>
> Although some of you use SPICE instead of VNC, as I am an Ubuntu user on
> my desktop and laptop, the SPICE protocol is not working within my OS, even
> though I tried to build it from source, search for unofficial deb packages, and
> convert it from RPM packages. I know SPICE is strongly supported in the
> Fedora community, but on the server side I work on RH Enterprise
> and CentOS, and for desktop use I have a very hard time making the
> SPICE plugin for Firefox work.
>
> Therefore, as my solution, I set up a VNC reflector + some shell
> automation to make it work between 2 different subnets (one inside and
> one outside) -- this, somehow, adds to the initial scope.
>
> It would have been much easier to have a VNC proxy inside the
> oVirt engine, from where to make the necessary setup and
> assignment of the console to each VM, or, even though it might sound
> funny, a solution like VRDE on VirtualBox, because that works great and
> is easy to set up or change.
>
> Last question, if I might ask: when is 3.2 planned to be released
> (approximately)?
last update was here:
http://lists.ovirt.org/pipermail/users/2013-January/011454.html
Thanks,
Itamar
12 years, 4 months
Re: [Users] Fwd: Successfully virt-v2v from CentOS 6.3 VM to Ovirt 3.2 nightly
by Matthew Booth
> [Users] Successfully virt-v2v from CentOS 6_3 VM to Ovirt 3_2 nightly.eml
>
> Subject:
> [Users] Successfully virt-v2v from CentOS 6.3 VM to Ovirt 3.2 nightly
> From:
> Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
> Date:
> 09/01/13 15:55
>
> To:
> users <users(a)ovirt.org>
>
>
> Hello,
> on my oVirt Host configured with F18 and all-in-one and ovirt-nightly as of
> ovirt-engine-3.2.0-1.20130107.git1a60fea.fc18.noarch
>
> I was able to import a CentOS 5.8 VM coming from a CentOS 6.3 host.
>
> The oVirt node server is the same one where I'm unable to run a newly
> created Windows 7 32bit VM...
> See http://lists.ovirt.org/pipermail/users/2013-January/011390.html
>
> In this thread I would like to report about successful import phases and
> some doubts about:
> 1) no password requested during virt-v2v
> 2) no connectivity in guest imported.
>
> On CentOS 6.3 host
> # virt-v2v -o rhev -osd 10.4.4.59:/EXPORT --network ovirtmgmt c56cr
> c56cr_001: 100%
> [===================================================================================]D
> 0h02m17s
> virt-v2v: c56cr configured with virtio drivers.
>
> ---> I would expect to be asked for the password of a privileged user in
> oVirt infra, instead the export process started without any prompt.
> Is this correct?
> In my opinion in this case it could be a security concern....
virt-v2v doesn't require a password here because it connects directly to
your NFS server. This lack of security is inherent in NFS(*). This is a
limitation you must manage within your oVirt deployment. Ideally you
would treat your NFS network as a SAN and control access to it accordingly.
* There is no truth in the rumour that this stands for No F%*$&"£g
Security ;)
> Import process has begun for VM(s): c56cr.
> You can check import status in the 'Events' tab of the specific
> destination storage domain, or in the main 'Events' tab
>
> ---> regarding the import status, the "specific destination storage
> domain" would be my DATA domain, correct?
> Because I see nothing in it and nothing in export domain.
> Instead I correctly see in main events tab of the cluster these two messages
>
> 2013-Jan-09, 16:16 Starting to import Vm c56cr to Data Center Poli,
> Cluster Poli1
> 2013-Jan-09, 16:18 Vm c56cr was imported successfully to Data Center
> Poli, Cluster Poli1
>
> SO probably the first option should go away....?
I'm afraid I didn't follow this. Which option?
> I was then able to power on and connect via vnc to the console.
> But I noticed it has no connectivity with its gateway
>
> Host is on vlan 65
> (em3 + em3.65 cofigured)
>
> host has
> 3: em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
> ovirtmgmt state UP qlen 1000
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3add/64 scope link
> valid_lft forever preferred_lft forever
> ...
> 6: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3add/64 scope link
> valid_lft forever preferred_lft forever
> 7: em3.65@em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet 10.4.4.59/24 <http://10.4.4.59/24> brd 10.4.4.255 scope global
> em3.65
> inet6 fe80::21c:c4ff:feab:3add/64 scope link
> valid_lft forever preferred_lft forever
> ...
> 13: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master ovirtmgmt state UNKNOWN qlen 500
> link/ether fe:54:00:d3:8f:a3 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc54:ff:fed3:8fa3/64 scope link
> valid_lft forever preferred_lft forever
>
> [g.cecchi@f18aio ~]$ ip route list
> default via 10.4.4.250 dev em3.65
> 10.4.4.0/24 <http://10.4.4.0/24> dev em3.65 proto kernel scope link
> src 10.4.4.59
>
> ovirtmgmt is tagged in datacenter Poli1
>
> the guest was originally configured (and it kept this config) on bridged
> vlan65 on the CentOS 6.3 host. Its parameters:
>
> eth0 with
> ip 10.4.4.53 and gw 10.4.4.250
>
> from webadmin pov it seems ok. see also this screenshot
> https://docs.google.com/open?id=0BwoPbcrMv8mvbENvR242VFJ2M1k
>
> any help will be appreciated.
> do I have to enable some kind of routing not enabled by default..?
virt-v2v doesn't update IP configuration in the guest. This means that
the target guest must be on the same ethernet segment as the source, or
it will have to be manually reconfigured after conversion.
Matt
--
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
12 years, 4 months
[Users] Testing High Availability and Power outages
by Alexandru Vladulescu
Hi,
Today, I started testing on my Ovirt 3.1 installation (from dreyou
repos) running on 3 x Centos 6.3 hypervisors the High Availability
features and the fence mechanism.
As I reported yesterday in a previous email thread, the
migration priority queue cannot be increased (bug) in this current
version, so I decided to test what the official documentation says about
the High Availability cases.
This would be a disaster scenario to suffer from if one hypervisor
has a power outage/hardware problem and the VMs running on it are not
migrated to other spare resources.
The official documentation on ovirt.org states the following:
High availability
Allows critical VMs to be restarted on another host in the event of
hardware failure with three levels of priority, taking into account
resiliency policy.
* Resiliency policy to control high availability VMs at the cluster
level.
* Supports application-level high availability with supported fencing
agents.
As well as in the Architecture description:
High Availability - restart guest VMs from failed hosts automatically
on other hosts
So the testing went like this -- one VM running a Linux box, with the
"High Available" check box enabled and "Priority for Run/Migration queue:" set
to Low. On the Host side we have "Any Host in Cluster" selected, without
"Allow VM migration only upon Admin specific request" checked.
My environment:
Configuration : 2 x Hypervisors (same cluster/hardware configuration) ;
1 x Hypervisor + acting as a NAS (NFS) server (different
cluster/hardware configuration)
Actions: I cut off the power to one of the hypervisors in the
2-node cluster while the VM was running on it. This translates to a
power outage.
Results: The hypervisor node that suffered the outage shows up in the
Hosts tab with status Non Responsive, and the VM has a question mark
and cannot be powered off or anything (therefore it's stuck).
In the Log console in GUI, I get:
Host Hyper01 is non-responsive.
VM Web-Frontend01 was set to the Unknown status.
There is nothing I could do besides clicking Hyper01's
"Confirm Host has been rebooted"; afterwards the VM starts on Hyper02
with a cold reboot.
The Log console changes to:
Vm Web-Frontend01 was shut down due to Hyper01 host reboot or manual fence
All VMs' status on Non-Responsive Host Hyper01 were changed to 'Down' by
admin@internal
Manual fencing for host Hyper01 was started.
VM Web-Frontend01 was restarted on Host Hyper02
I would like your take on this problem. Reading the documentation &
features pages on the official website, I assumed this would have
been an automatic mechanism driven by some sort of vdsm & engine
fencing action. Am I missing something here?
Thank you for your patience reading this.
Regards,
Alex.
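For context, automatic HA restart depends on the engine being able to fence the dead host, i.e. the host needs power management (a fence agent) configured; without it you are left with the manual "Confirm Host has been rebooted" path described above. A minimal sketch of adding a host with power management via the Python SDK; the engine URL, agent type, addresses and credentials are placeholders, and the exact parameter names are my reading of the SDK, not verified against this setup:

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='password', insecure=True)

api.hosts.add(params.Host(
    name='Hyper01',
    address='hyper01.example.com',           # placeholder
    root_password='hostrootpassword',        # placeholder
    cluster=api.clusters.get(name='Default'),
    power_management=params.PowerManagement(
        enabled=True,
        type_='ipmilan',                     # assumed fence agent type
        address='10.0.0.10',                 # placeholder BMC address
        username='fenceuser',
        password='fencepass')))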
12 years, 4 months
[Users] ovirt fails to attach gluster volume
by Jithin Raju
Hi All,
I have a fresh installation of oVirt 3.1 with datacenter type POSIX,
ovirt engine + 1 node.
I created a gluster volume and am able to mount it locally.
mount -t glusterfs fig:/vol1 /rhev/data-center/mnt/fig:_vol1
df -h gives:
fig:/vol1 50G 3.9G 43G 9%
/rhev/data-center/mnt/fig:_vol1
looks fine.
When I try the same from the oVirt GUI I receive an error "failed to add
storage domain".
GUI parameters passed:
nodename:/volume_name
VFS type: glusterfs
mount options: vers=3 (tried empty also).
I reported the same issue a week ago and got replies saying it's a bug.
I would like to know if there is a workaround.
vdsm log:
Thread-2474::DEBUG::2013-01-11
12:26:26,370::task::588::TaskManager.Task::(_updateState)
Task=`efb3b3cc-5645-4f87-92cb-b9ecb8ccce48`::moving from state init ->
state preparing
Thread-2474::INFO::2013-01-11
12:26:26,371::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=6,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': 'fig:/vol1', 'iqn': '', 'portal': '', 'user': '', 'vfs_type':
'glusterfs', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000'}], options=None)
Thread-2474::INFO::2013-01-11
12:26:26,371::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-2474::DEBUG::2013-01-11
12:26:26,371::task::1172::TaskManager.Task::(prepare)
Task=`efb3b3cc-5645-4f87-92cb-b9ecb8ccce48`::finished: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-2474::DEBUG::2013-01-11
12:26:26,371::task::588::TaskManager.Task::(_updateState)
Task=`efb3b3cc-5645-4f87-92cb-b9ecb8ccce48`::moving from state preparing ->
state finished
Thread-2474::DEBUG::2013-01-11
12:26:26,372::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-2474::DEBUG::2013-01-11
12:26:26,372::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-2474::DEBUG::2013-01-11
12:26:26,372::task::978::TaskManager.Task::(_decref)
Task=`efb3b3cc-5645-4f87-92cb-b9ecb8ccce48`::ref 0 aborting False
Thread-2475::DEBUG::2013-01-11
12:26:26,410::BindingXMLRPC::156::vds::(wrapper) [135.250.76.71]
Thread-2475::DEBUG::2013-01-11
12:26:26,411::task::588::TaskManager.Task::(_updateState)
Task=`f377d9bb-c357-49f9-8aef-483f0525bec9`::moving from state init ->
state preparing
Thread-2475::INFO::2013-01-11
12:26:26,411::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=6,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': 'fig:/vol1', 'iqn': '', 'portal': '', 'user': '', 'vfs_type':
'glusterfs', 'password': '******', 'id':
'c200ffa7-a334-4d8d-b43e-3f25f3e8a84c'}], options=None)
Thread-2475::DEBUG::2013-01-11
12:26:26,419::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
/usr/bin/mount -t glusterfs fig:/vol1 /rhev/data-center/mnt/fig:_vol1' (cwd
None)
Thread-2475::ERROR::2013-01-11
12:26:26,508::hsm::1932::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
conObj.connect()
File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
self._mount.mount(self.options, self._vfsType)
File "/usr/share/vdsm/storage/mount.py", line 190, in mount
return self._runcmd(cmd, timeout)
File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
raise MountError(rc, ";".join((out, err)))
MountError: (1, 'Mount failed. Please check the log file for more
details.\n;ERROR: failed to create logfile
"/var/log/glusterfs/rhev-data-center-mnt-fig:_vol1.log" (Permission
denied)\nERROR: failed to open logfile
/var/log/glusterfs/rhev-data-center-mnt-fig:_vol1.log\n')
engine log:
2013-01-11 12:28:21,014 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] START, V
alidateStorageServerConnectionVDSCommand(vdsId =
ee2b26ba-5bb1-11e2-815e-e4115b978434, storagePoolId =
00000000-0000-0000-0000-000000000000, storageType = PO
SIXFS, connectionList = [{ id: null, connection: fig:/vol1 };]), log id:
658913d
2013-01-11 12:28:21,046 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] FINISH,
ValidateStorageServerConnectionVDSCommand, return:
{00000000-0000-0000-0000-000000000000=0}, log id: 658913d
2013-01-11 12:28:21,053 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] Running command: AddStor
ageServerConnectionCommand internal: false. Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: System
2013-01-11 12:28:21,056 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] START, ConnectStora
geServerVDSCommand(vdsId = ee2b26ba-5bb1-11e2-815e-e4115b978434,
storagePoolId = 00000000-0000-0000-0000-000000000000, storageType =
POSIXFS, connectionList
= [{ id: c200ffa7-a334-4d8d-b43e-3f25f3e8a84c, connection: fig:/vol1 };]),
log id: 322d95a9
2013-01-11 12:28:21,187 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] FINISH, ConnectStor
ageServerVDSCommand, return: {c200ffa7-a334-4d8d-b43e-3f25f3e8a84c=477},
log id: 322d95a9
2013-01-11 12:28:21,190 ERROR
[org.ovirt.engine.core.bll.storage.POSIXFSStorageHelper]
(ajp--0.0.0.0-8009-4) [29437bcd] The connection with details fig:/vol1
failed because of error code 477 and error message is: 477
2013-01-11 12:28:21,220 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector]
(ajp--0.0.0.0-8009-2) [522a5ac5] The message key AddPosixFsStorageDoma
in is missing from bundles/ExecutionMessages
2013-01-11 12:28:21,242 INFO
[org.ovirt.engine.core.bll.storage.AddPosixFsStorageDomainCommand]
(ajp--0.0.0.0-8009-2) [522a5ac5] Running command: AddPosixFs
StorageDomainCommand internal: false. Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: System
2013-01-11 12:28:21,253 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(ajp--0.0.0.0-8009-2) [522a5ac5] START, CreateStorage
DomainVDSCommand(vdsId = ee2b26ba-5bb1-11e2-815e-e4115b978434,
storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static@9c3f6ce6,
ar
gs=fig:/vol1), log id: 6a3a31b8
2013-01-11 12:28:21,776 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-2) [522a5ac5] Failed in CreateStorageDomainVDS
method
2013-01-11 12:28:21,777 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-2) [522a5ac5] Error code StorageDomainFSNotMou
nted and error message VDSGenericException: VDSErrorException: Failed to
CreateStorageDomainVDS, error = Storage domain remote path not mounted:
('/rhev/data
-center/mnt/fig:_vol1',)
2013-01-11 12:28:21,780 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-2) [522a5ac5] Command org.ovirt.engine.core.vd
sbroker.vdsbroker.CreateStorageDomainVDSCommand return value
Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 360
mMessage Storage domain remote path not mounted:
('/rhev/data-center/mnt/fig:_vol1',)
Thanks,
Jithin
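One way to narrow this down is to run, outside vdsm, the exact mount command the vdsm log above shows failing; the permission-denied on the glusterfs log file suggests the problem is the permissions/ownership of /var/log/glusterfs rather than the mount itself, but that is only a guess from the traceback:

import subprocess

# Exact command from the vdsm log above (the Storage.Misc.excCmd line).
cmd = ['/usr/bin/sudo', '-n', '/usr/bin/mount', '-t', 'glusterfs',
       'fig:/vol1', '/rhev/data-center/mnt/fig:_vol1']
rc = subprocess.call(cmd)
print('mount exited with %d' % rc)   # non-zero reproduces the MountError vdsm raised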
12 years, 4 months
[Users] ovirt-cli 3.2.0.9 released
by Michael Pasternak
* Sun Jan 13 2013 Michael Pasternak <mpastern(a)redhat.com> - 3.2.0.9-1
- ovirt-cli DistributionNotFound exception on f18 #881011
- adding to help message ovirt-shell configuration details #890800
- wrong error when passing empty collection based option #890525
- wrong error when passing empty kwargs #891080
More details can be found at [1].
[1] http://wiki.ovirt.org/Cli-changelog
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
12 years, 4 months
[Users] ovirt-engine-sdk-python 3.2.0.8 released
by Michael Pasternak
* Sun Jan 13 2013 Michael Pasternak <mpastern(a)redhat.com> - 3.2.0.8-1
- events can be added now (user defined events)
- events can be removed now
- vm can be removed now, but its disk(s) kept (added disks.detach_only property to VMDisks; see the sketch below)
- to host add()/update() methods added power_management.agents parameter
- host can be added now by cluster.name (not only cluster-id)
- to disk added permissions sub-collection
- to NIC added "linked" property
- to NIC added "plugged" property
- to VM added ReportedDevices sub-collection (holds data reported by the guest agent)
- to VMNIC added ReportedDevices sub-collection (holds data reported by the guest agent)
- to PowerManagement added Agents collection
- to VMDisk added move() action
- to cluster added "threads_as_cores" property
- to CpuTopology added "threads" property (indicating amount of available threads)
- to Host added "libvirt_version" property
- to Host added "hardware_information" property
More details can be found at [1].
[1] http://wiki.ovirt.org/Python-sdk-changelog
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
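As an illustration of the "disk(s) kept" item above, a sketch of removing a VM while detaching (rather than deleting) its disks; the exact nesting of detach_only is my reading of that changelog entry and of the REST action body, so treat it as unverified:

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='password', insecure=True)

vm = api.vms.get(name='somevm')   # placeholder VM name
# Remove the VM but keep (detach) its disks; nesting below is an assumption.
vm.delete(params.Action(vm=params.VM(disks=params.Disks(detach_only=True))))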
12 years, 4 months
[Users] Can I move local_cluster in all-in-one setup?
by Gianluca Cecchi
Hello,
working on f18 and ovirt nightly as of
ovirt-engine-3.2.0-1.20130106.git0cb01e1.fc18.noarch
Can I configure, out of the box, an all-in-one setup with VLAN tagging for the
to-be-created ovirtmgmt network?
In that case, what does the host's network config have to look like before running
engine-setup?
classic eth0 + eth0.vlanid or other things?
In case the answer is no, can I configure it after creation?
I saw in another thread that in general I can create a temporary DC and
move the clusters there so that I can then edit the ovirtmgmt network
making it vlan tagged.
Is this possible in all-in-one too?
I created another datacenter named tempdc and tried to move my
local_cluster there... but I don't see how I can.
I edit local_cluster, but the option to change the DC is greyed out.
Thanks,
Gianluca
12 years, 4 months
[Users] API usage - 3.1
by Tom Brown
Trying to get going adding VMs via the API, and so far I have managed to get quite far - however I am facing this:
vm_template = """<vm>
<name>%s</name>
<cluster>
<name>Default</name>
</cluster>
<template>
<name>Blank</name>
</template>
<vm_type>server</vm_type>
<memory>536870912</memory>
<os>
<boot dev="hd"/>
</os>
</vm>"""
The VM is created but the type ends up being a desktop and not a server -
What did i do wrong?
thanks
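If I remember the 3.1 schema correctly, the element for this is <type>, not <vm_type>; an unknown element is silently ignored, which would explain why you end up with the default "desktop". A sketch of the adjusted template, worth double-checking against the RSDL your engine serves:

vm_template = """<vm>
    <name>%s</name>
    <cluster>
        <name>Default</name>
    </cluster>
    <template>
        <name>Blank</name>
    </template>
    <type>server</type>
    <memory>536870912</memory>
    <os>
        <boot dev="hd"/>
    </os>
</vm>"""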
12 years, 4 months
Re: [Users] trouble with pci passthrough
by ahuser
Hi Itamar,
Thanks for your reply.
I tried to do this with virsh + an XML file and with virt-manager.
What is confusing is that after installing oVirt the device is not passed through cleanly.
Sent from a Samsung tablet. Itamar Heim <iheim(a)redhat.com> wrote: On 01/06/2013 12:59 AM, Andreas Huser wrote:
> hi everybody
>
> I have trouble with PCI passthrough of a parallel port adapter. I need this for a key dongle.
> The server is a single machine and I want to use it with the all-in-one plugin from oVirt.
>
> I did some tests with:
> Fedora 17, CentOS 6.3, Oracle Linux 6.3
> latest kernel, qemu-kvm and libvirt from the repos. No extras or advanced configuration, only a simple standard server.
>
> I install "yum groupinstall virtualization" + virt-manager and a few other packages.
> I configure IOMMU, the module blacklist and some other bits.
>
> Then I start a Windows Server 2003 VM and assign the parallel adapter to the running server. I look in the device manager and find the adapter card.
> The dongle works fine and the Datev Lizenz Service is online.
>
> .. so far so good
>
> But then I install oVirt on the same server, with the same kernel, qemu-kvm and libvirt!
> I attach the adapter card to the Windows Server 2003 VM, look in the device manager, and find the card with an error "device cannot be started (code 10)".
>
> I have now been looking into the error for several days and have tried various things, but I cannot get any further.
>
> can someone help me?
>
> Thanks & greetings
> Andreas
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
how did you attach the device via ovirt?
12 years, 4 months
[Users] Cannot see FC LUNs in Direct LUN screen
by Dave Olker
Hi folks,
I'm very new to oVirt and Linux so I apologize if this is pilot error.
I'm running Fedora 17 on my oVirt 3.1 Engine and my oVirt 3.1 Node. I have
updated (via yum) to the latest patches on both systems. I have a 3.1
cluster created with a single node and I've created several VMs using boot
disks located in FC storage domains. I'm now trying to use the new "Direct
Lun" feature to present disks to my VM guests outside of a storage domain.
I've verified that my oVirt Node can see the LUNs correctly and multipath
shows they are active:
[root@atcwin3 ~]# multipath -v2
create: 360014380024d0ad00000f00000f80000 undef HP,HSV300
size=50G features='1 queue_if_no_path' hwhandler='0' wp=undef
|-+- policy='round-robin 0' prio=25 status=undef
| |- 0:0:15:2 sdz 65:144 undef ready running
| `- 2:0:15:2 sdbr 68:80 undef ready running
`-+- policy='round-robin 0' prio=5 status=undef
|- 0:0:14:2 sde 8:64 undef ready running
`- 2:0:14:2 sdaw 67:0 undef ready running
create: 360014380024d0ad00000f00000fc0000 undef HP,HSV300
size=50G features='1 queue_if_no_path' hwhandler='0' wp=undef
|-+- policy='round-robin 0' prio=25 status=undef
| |- 0:0:14:3 sdf 8:80 undef ready running
| `- 2:0:14:3 sdax 67:16 undef ready running
`-+- policy='round-robin 0' prio=5 status=undef
|- 0:0:15:3 sdaa 65:160 undef ready running
`- 2:0:15:3 sdbs 68:96 undef ready running
...
And these disks are visible in the "New Domain" screen if I want to add them
to a new FC domain (see the first attached screenshot).
But when I try to add any of these disks to a VM via Direct LUN I get an
empty list (second screenshot).
So the oVirt Node can see the disks just fine. oVirt Engine can see and use
the disks in a storage domain but I'm not able to add them to a VM using
Direct Lun mode.
I'm not sure where to look next. Any suggestions would be greatly
appreciated.
Thanks,
Dave
[The HTML copy of this message and its two embedded screenshots (image001.png: the "New Domain" dialog; image002.png: the empty Direct LUN list) are omitted here.]
+b3POZ8/53M/f+7n88nn3txP8vrs1oX7OX/e53nOPZ/3ff85N7F9+3bmfo2MjCQSifPnz584cWJw
cPD48eOnTp06ffr0uXPnPGrl1C0aQk7JA2FAAARAAARAAARAII4EkslkYWHhpEmTPvCBD1x44YVF
RUWkJfoZSOJ//a//ZS8nVTTZhPE39UHvyDfpvzHSOP2AQBkQAAEQAAEQAAEQAAE3Avn5+fLW8PAw
KYfyv3SRfijVRQ/VM+93f/d3U9qVFeiSimZeXh5prxdccAEZNYeGht7VL1JsMSUgAAIgAAIgAAIg
AAITgcDRo0ffe+896eImGydZN6WiaNglVWNlCpBEijOddFXSUsl73t/ff+DAAdIt6W9yoFO1j3zk
IzNnziQtkxRQ+ud///d/TwS4GCMIgAAIgAAIgAAIgMDChQtJSyRdk4yPv/zlL0lRLCgomDZtGimK
l1566dVXX01aopuZ06JuSrsmNfTmm28+//zz1CjpsKTATpkyZfr06TfffPMNN9wwdepUSRwxkVh5
IAACIAACIAACIDBBCBi+ckrm+dnPfkZmx9/+9re//vWv6X0ZzXnjjTeSxind4ymOdVPdNO699tpr
zzzzDLEjKyYpsOXl5XV1daS00j89zKQThDWGCQIgAAIgAAIgAAITloChCp45c6anp+fRRx89cuQI
2SXJRknGzptuumnGjBl2G6epbko3el9f3969e99//336m3zot91225IlS8i0KR3odNEtyk9//fXX
X331VbKA0t/0phHoGYh+a2trTU1NoCooDAIgAAIgAAIgAAIgkGkCpNpxzTAv74ILJp8+/Rv655Qp
F33wkkvLZs266qqPfuy6a8ndTT7wkydPNjc379mz54Mf/CCpiBTQWVlZSX9LrdIQ0uJMHxgYIA2S
4kDPnj1LBtLq6uo//MM/vOiii2SX1Oi//Mu/kKue/kllSKulMuSwh7qZ6SlH+yAAAiAAAiAAAiCQ
TQKauilOJZIWzSTpnnn5kyYVFOQXXPY7pdd//Ia5N1QUFxeTpbKjo2P37t2kZZJa+KEPfaiiooLs
nVTLMFbmkaNcSk8K5UsvvUTWSlIiKXzzs5/97Be+8AVKSCfVkyr89Kc//eEPf/jiiy+Se500WZma
5J30nhbKihUrtm3blrYYCoAACIAACIAACIAACGSZAI+/pAOP5HFHIyOkSp57/31ubTz923feeefg
wYO/GjiaSOaV/+41ZWVlZLIkvZMsmlSA4jhJDTXOSOKqqiE6KZp0kR5KeidVW7VqFR2wRE2TWvn4
448/9thjpGuS6kkN0UW3yENvmEl9HvKZZUzoDgRAAARAAARAAARAIAQBLeNHqn3iIhUxLz9PaoBn
zpx++9fH9r3w/M6d/7n3uX2XFBf/2Z/9GcVeSjvo4cOHSR81TJsWdZNiMX/zm99QhtFll11GZ79T
c7KnXbt2/ehHPyJNVL5DGifpoHTJ4z3lAJClHmIiUQUEQAAEQAAEQAAEcpOA9GDrlk3j/3OVj2I0
SWbKRn//7G9ff/Xl7Tv+45VXDxZfcglpj2TXJDc4pa5T7KU6Ls26SSZM+plKOs6dTKCknNKBR6TG
Sh96Z2cntSstnUH1S6ihubmGIBUIgAAIgAAIgAAIBCfA9U5SEcnmyC2PI4xMjwcOvPyTn3a++95v
fu/3fo/URQrIpIQh0iqpjNG+pm6SHkq3SdcsKSm56qqrqDQ1R8ZOylJ/++23pYfdQ3eUv0Ikr+Ci
owYIgAAIgAAIgAAIgED8CJCbOzly7qXel57/xYuUHnTNNddQijmZKcmOSb9ClKpu/upXvyKFkhzt
H/3oRxcsWCANmZSH/sYbb0gzpwpAzRCikrJRSjCii/6gf8p0dbrCHZAUP9iQGARAAARAAARAAAQm
BgHDwCmHSzbK0+8N7X+591dHBuiXhy655BLKKafYS/rloVR1kxztVFlaNylPiG6Tyrh//34qTaZN
Q32k9w1fPr1J2iSduvQ7v/M7c+bMoaR3uugP+ie9SbekzgqNc2KsPYwSBEAABEAABEBgohDQss7J
XU7WymF27uwZykz/xYsvkYecbJykT5LxkZTOVHWTPOnSXU7hm9IhTk53cr3LN40MdKk+8paHh+mM
JDpaafbs2VdeeSVpmfSzQ3TRH/RPepNuUQFZEhrnRFl9GCcIgAAIgAAIgMDEIMBd33SQOz/LfeT8
yPCpU+/84qXeyZMny4BMuki3dLZukj1S/lIlXQcOHKDDNWVRw5lu5CiRVlpaWvrhD3+YfqroAx+4
gIq9Ky76g/5Jb9ItKkDFZGs+Yzr//Nv0S0P8+vafq3Olvf3I+j/MwgwaMmiiaP8vO517jS+rFLIA
Gl2AAAiAAAiAAAjEmgBpnFLNYwlSAt8S3nOyUfITOs+dI8+5qW5+f/UP+xhLUSup5r59+ygWU9oy
VRT0T1IiyWNO5yUVFhaSt/3s2felFVNe9E96i25RASpGhVNaMFozTpi3o77sxqyolgHmePI1f9o6
9ipnAIFRFARAAARAAARAAASyQYDUzXNnzw69c0oaKKWpUZ6XJC8tM107zFNJMKfATXK9ywpGafqb
bKR0UhL9ThG9SQ2Rbvn+++fePzty7kyCXvQH/VO8yfugYlRYmlXtw3VTQ3nJydd8xmLgzAastH2Q
ymk1u6atEWWB791HvzBfU3Nn039G2SraAgEQAAEQAAEQAIFREqBfHxomdVB6s9X/ynYTf/SRP7tl
0+d/vnUrudGPHDmycuXKP/mTPyFjJ/2CpdQ11cx0so6SV/7yyy+Xv01EJtQE/V/+b85NGhwueJtr
r+9fnH+mJHHuA/wG7y5x7Nixt956i/zy6uHybmMif/EnLzNuHn6q5r7v8X9pb7/38r8qutYfrn/k
T6+ZrBVWbunvm2/5eceUKE1nhlCihkVgi3x6p1T+4GxjVFoRpZq1PWYZFWNqkymCmf/cxlYYJFKa
G+XiQXUQAAEQAAEQAAEQSEeAK4RUZiTR2tb6t3/7t08//TTFVZIyafixzR+xJLWU64s85JNf0h6Z
kuVDb9Ih8nRRE3Sdf//8WXbqdNErpy98/vQHXuIv+qPoFXqTbskysryjddNV+PcOH36Pbrp71Ekp
azV1TSqquLr/c+8hXplNnlomO/jDubOkVjp51lwt+LNsqnjnvUN7fVoK/7PpzqcOi0Yum61bXXks
paIcW4Qwh3bZJ9VCJOcjjzyivkP3DYtpiq4px5XGg08lVBJqc+kWB+6DAAiAAAiAAAiAwCgJcOMi
T9JJJkylMrVJ/Q4lFTHLCe3Sr56S5UP/pB/KJDvl2bNnuC75/sj7hUfev/D1kcKTSZZPL/qD/klv
0i26T8WoMFVxVDdbWlpcRnjqyWe5cufiUf/D9SuEWZNMf8K9XPOvL3MFc/I1K0TAp65v6oqhplsq
GuifzxYmVP/aJi/9vYNS35xWLHTWP/+2VDUNIaQ6qguhDIwMjqaQVGLyZK2OFNvUYP/8M2JYWnlt
VKba7LYadAk0fVhRiEe5flAdBEAABEAABEAABNIQ4Dqe9muXriUNdZNsmVp0pyxrZJSnaIpSB6XQ
TH7Rz6fnv83y30ucL+S5SWRFPV9I/6Q36ZYs4pGW7pWu/r0nhS522SftwZK6tfLws3oU4382bZMK
pzRf6vqmVAz/sHga1+EOc21QfSegtplCUNdYX96mC/G9+zSF07ChiirvvfykCAgwpKJ3tDq6nHrT
WnAmDyDgllPdaKnpt25TaGDQkOFjAQIgAAIgAAIgAALZJZDmRyVVu6cZ3ekhIuX3SEc6N27SaZ10
lLtUVKWSKl70Jt0yinmlBLn2pKuQ7h518hsbl6aaaQ50TY8T2qdUTg8fvI8bJ5V3RqdtSh2WsZPH
FG983ymrF99ewuUdA4JxCJPFSe+5YN471ZfdBYXeQAAEQAAEQAAEQCAQAV3d9Pdb52SPlD9ZqSmT
586ef28yO/uBRN77UtGkP+if9CY50mUZ+bOWPs/dtIj+n026R3225X3TOe6uq8r4Ta59itJcJxPK
oPFOcG1Ts2daNcxArNMUpshNXcsU/nTd1x5lH2gLBEAABEAABEAABLJNIHlA9pigX6o8Rz9z+aMf
/UieduQoiFQ36TxOcX4nmTjPvP/ORe8fmzl8+sJkcphe9Af9k96kW/z++XNUOKS6SeGSmkf9MjNb
ncTSjIhGjKOM35SXzGQ3PNeXzV4vzJDcCClNntOK14vAzWBxm2aoptBcqf1jJ0U/Fke3noAU0t6o
Veeqpj4M0YfFgprt9YH+QAAEQAAEQAAEQMAnAdIh/+d//ufiiy+mI9jVKu5JRE4Nk7pJ9YXBUmib
598/+5v804Mlv+2/4vTh36EX/2OwhN6kW9zb/j63blKVMNZNrtVpQZkWWYxUIDOuU3dCm4GeWqHL
ruHZN4cPci1UqIiTr7nGn7bJc92Vy7A7agGjWuKQkhlk5A75Tnd3njo9w0lLHPI5vygGAiAAAiAA
AiAAAjlKILlp0+fLfMtGWiMdvUkaJJ0AL/VN0ijPvpf/m19dOvTaR+lFf9A/uZop7J9UjApTlZDq
Jlc4hUfdqm/qSqgRvSmVwcNPKVZBJQ9HD2/UzaLBbZt690oPRmaQrpbqMoQ+hl1PfddGZYRvpkkV
8j15KAgCIAACIAACIAACY0EgsHWTzKTSwCm0TbqGSbEcZu8PJ87wF3uf/ine5OqmNG16JKf7GLKu
16lF6RjMlNBGfhqQxQVt+LtNb7SpgYbwT4vjhiw98ERy/fAhIZxdBh/DU4p87z4lXJM35pjqHqxN
lAYBEAABEAABEACBMSaQ2L59O4mwdetWI3+cjsMkYyQFQjqKJs9FotPg6cfQKcRT/laQoVAafxjq
pmzW0bpJvmq3XsaYCroHARAAARAAARAAARAISIB0yNraWlmJdEWHXxXy06CqTZLlktKApP2SDJmk
VsozkqTtU94ipVPqmsF+VciPKCgDAiAAAiAAAiAAAiAQBwJhnOnyB4ekcimjM+V/jT9kZCcVcPxp
ojhggYwgAAIgAAIgAAIgAALREAimbhqmSqlH0j9JpyQTpjyMUx6xycM5dQe6/Wcwo5EarYAACIAA
CIAACIAACMSEQGB109A45QCl3kkXeejpMv4p744uSSgmCCEmCIAACIAACIAACICAO4Ew6iZ4ggAI
gAAIgAAIgAAIgIBPAiHVTZ/naPos5lNWFAMBEAABEAABEAABEIgdgZDqZuzGCYFBAARAAARAAARA
AATGhADUzTHBjk5BAARAAARAAARAYKIQgLo5UWYa4wQBEAABEAABEACBMSHg+qtCYyINOgUBEAAB
EAABEAABEIgpATqSKIJfFYrp4CE2CIAACIAACIAACIDAWBGAM32syKNfEAABEAABEAABEJgQBKBu
TohpxiBBAARAAARAAARAYKwIpIndfOGVvrGSDP2CAAiAAAiAAAiAAAjEiEDFlbMcYzcTzVu30TB2
bW+XP3ROV0tLi3E8O6mb119VFqNxQlQQAAEQAAEQAAEQAIHsEyClMUXdXPy5KilG8oL8SfTKvkzo
EQRAAARAAARAAARAYBwTkEomvRC7OY5nGUMDARAAARAAARAAgbEnAHVz7OcAEoAACIAACIAACIDA
OCYAdXMcTy6GBgIgAAIgAAIgAAJjTwDq5tjPASQAARAAARAAARAAgXFMAOrmOJ5cDA0EQAAEQAAE
QAAExp5AovXft5MUj/9wKw5CGvvZ8JSgsbExxyWEeCAQawINDQ2xlh/CgwAIgMDYErAfhHTr5+uk
SFA3x3ZqAvRO6iYehwF4oSgIBCGAz1cQWigLAiAAAg4EPNRNONOxYkAABEAABEAABEAABDJIAOpm
BuGiaRAAARAAARAAARAAAaibWAMgAAIgAAIgAAIgAAIZJAB1M4Nw0TQIgAAIgAAIgAAIgEBIdfP4
9q/cyK+vbD+uMnzp27a3gBgEQAAEQAAEQAAEQGCiETh79uwPnjkpRx1S3RR1FyxY0NX4ry9NNHwY
LwiAAAiAAAiAAAiAQFoCPd+/+87HXhululn2xS/WspZ/sho403aNAiAAAiAAAiAAAiAAAuOcQEG+
sGk+vvOpw6OybjJ27Z82kIGz0UXh5K519bJ43jW/u1Lm29xOmvpvZSp0B76TF3+czxiGBwIgAAIg
AAIgAAKxIpBI5q/6OFu6+i8+fuEo1U12yefIwOnhUa9tfta4mssab7HGenY13rJrsbzfTHbSelIk
/+nKH4t//7hhQUu9UEDFxVXNWxrL9NZ+3MBS24rVDEBYEAABEAABEAABEBjvBBZ8pfXmK1iycLTq
pjRwunjUr73v2fuuNUleu5g0011dSm7RgoYf6wX4TcZqm7/7uUtEjUsWLKZmd2n65kv/2tilFCYt
t4GbVRE3Ot7XKcYHAiAAAiAAAiAQZwIfuIhNvmD06mYaA6fqUK9v8eA188oFrndf2tXCSP2Uiqi8
LikrY6yvz5IXH+fZgOwgAAIgAAIgAAIgML4I9Hy/ZufrfEijyUzXkbgYOEWsZX2L6U8nh3mo63hf
HyOP/S2WQFBP3TVUN6gEAiAAAiAAAiAAAiAQDYGR4XM/eIa1/59/+O/BSNRNZwOn5gC3+NPDyS8s
meRLN8NAtb90z3u4ZlELBEAABEAABEAABEAgMwTePzfMG1665PqiaNRNiuC8j+f6/NM/kRlSu4RJ
sqxMdYCHHg13tHe9eiR0fVQEARAAARAAARAAARDIMoGP3/H9TZ/5KCmbUTjTheziTKSuLmMcMtfH
PJSTgjhD+79FZhAlrpuZ6tQNNWj5d5YJojsQAAEQAAEQAAEQAAE3AoWFhXd8fFoij02ZEp26KT3q
ynXJ577bXGtGXNKJR2FjN6lRaozORurjRyXpV31fw58qee+YbhAAARAAARAAARAAgRwiMGUqu/BC
Lk+i9d+30/97/Idbh4eFi52xlpaWRCIh/37hlb7rryrLIcEnsCiNjY0NDQ0TGACGDgIZJIDPVwbh
omkQAIGJQYCUxoorZ9XWatbHZDJ56+fr5NAjc6ZPDJIYJQiAAAiAAAiAAAiAQDACUDeD8UJpEAAB
EAABEAABEACBQASgbgbChcIgAAIgAAIgAAIgAALBCCB2MxivMSxNsWVj2Du6BoFxTwCx0eN+ijFA
EACBjBLwiN2EuplR8mgcBEAABEAABEAABCYEAa9UIfrddHrhAgEQAAEQAAEQAAEQAIEICUglk16I
3YyQKpoCARAAARAAARAAARBIJQB1E2sCBEAABEAABEAABEAggwSgbmYQLpoGARAAARAAARAAARCA
uok1AAIgAAIgAAIgAAIgkEECUDczCBdNgwAIgAAIgAAIgAAIQN3EGgABEAABEAABEAABEMggAaib
GYSLpkEABEAABEAABEAABKBuYg2AAAiAAAiAAAiAAAhkkADUzQzCRdMgAAIgAAIgAAIgAALJETYC
CiAAAiAAAiAAAiAAAiCQIQLJ88PnMtQ0mgUBEAABEAABEAABEACBZH6y4MzwaYAAARAAARAAARAA
ARAAgUwQSPz1w1/+xO/cvHXr1uHhYdlBS0tLIpGQf7/wSt/1V5VlomO0GZRAY2Nj0CooDwIg4IdA
Q0ODn2IoAwIgAAIg4EGAlMaKK2fV1tbKMslksq6uTv6duPp7l6xgdQf/+xjUzRxfQ6Rufv3rf5nj
QkI8EMg+gf98snM0nb70/F6om6MBiLogAAIgIAl4qZuTtrMZr09a8MznR4a1nCFYN3Nz3UDdzM15
gVRjToDUzVuX3BxOjMd3PgF1Mxw61AIBEACBFAIe6mbyzDD75UfOnBt5H9RAAARAIL4ETge/4jtY
SA4CIAAC8SKQZK+zkdcTOAwpXtMGaUEABEAABEAABEAgLgSSm0s3//Pl/1yYKIyLxJATBEAABEAA
BEAABEAgRgSSJZNK6BUjiSEqCIAACIAACIAACIBAjAgkz42co1dQiQd2bKLMFcu1acdAoFb2bWls
DFrHrQPelnJF1GyUEgZCg8IgAAIgAAIgAAIgMH4I5P1J3Z8kE8mXXnppZEQL4FyxYoVx7ubAr0/O
vGSafbjvvtr93JEZtzasrlkorznvdm/f/sRTfdMWVszwR2dg31OvnP7QjfOvvNBfeZdSpPh+u+UX
7IYvrf3iUlWUAJK49R+RhKManlL5qaee+v3f/321tZfbv9vyZM8zz1hfb130iWuKfXV67KfNj+x4
+pmel9/78PVXTPZVxdJ9x3dannnvwxVpqx77yZZHOk9QH4ccBX6m560pn7jmmNHa/vbvtFK7YUTy
HoM+Xo2YBVTGOg2E9WVvpMd+0vwv258+POUTV9sm2HrLHKlRWBR4w/faCCT2WBc++EbfnI/OPncu
8Nfm/Pz8V157/eiRX9HWMdaDQP8gAAIgEHsCpDTO+OC09vZ2ORJSJq+99lr5dzKiwc1YurrhSzdM
PfR4RJZFn2Lt2/KD505NveFLq5eaSq4U5RKfLcS92JTy2q9+5avm6zOXHX7yO80/PeZjXC//d++Q
qF7/aX/qqXubpPh+p32/8/2XO1oOzaqt/5TWR6rAXPiqa3yIO7oiXMKW3os+Y7CqKX/nye98p+Pl
0TWb3drFn15aPoUd3m8X+ljvoSF22WeWE0g+0h1sqVwStdOf+c53m39Cy6H40/V8yO2xGnB28aI3
EAABEACBTBHIkwe+h7JuFs2xmjIvvLKo76lf7D+uWzjJGf2Dx8kmp1/73p2j2zLp1uOHGDtz5Dn9
prRGelRxIrDvx4+/cmrWLV+sTLWoXnjlleZbwgD6hNaRIoTorfPdOWWvbtbvm3fdJORiuDUo3j8w
bSH7sRh4BAZWddB26+ax/T1vnLn0WosZsPiaD//m5ed6X01vHTz20jO9Ry+67tM+TaF2/sf2P8O7
J+umaGrSh52sqsd+sqOr4BPVNwpl00lgvd3iqz/xCWkoPS7aLY/Quknm1R+9Pqm89kt/VGYMY/IV
11/01jO/+IVm44y+0zAfWROpS+3JF554uff1oVQj5cs/3nGAlS9dTFPx0x0/P1r8BxpwNvmKy9/b
/1zvoDCITr5i0uEnn9zvZBwNI2zu1LFbN//ugb/f9V+7f/LTp1Jee37W9ck/qDQkh3UzdyYRkoAA
CIwDAh7WzSjVTcZmsL6nXjl0WtM3yRl9eqHpb592oOVRUu6ExjmjYuE0Kqr4wKUP3qOKwzxIbXNh
jYf/nmuA29803P7C59+tq73cWz545LmDH1opPfHT+p746W5NSXSV0KNBEWIw+Io+ar9hBT5XmD91
k3Gl4q2eA0eZ4Y3mvuztT1s9yOQ7/tHrZxkbeuOZZ6S/lXuTnzT98vt1L3mql1m09gavoulGl735
7629Q2pTyoBefmL74VlcDRLveambph9Zan7sje07NHkMrzf3ET91YsqJzhaKAdAkdBpdCtFjz3Q+
TypYjdR5zau4+L39va8fFiO1daphkaW94Oyf8uGBHeTjliENhvOaV3G5JVA4TAoHpGvwbouC9E1D
ZqPM/v9+8o2LpIpJWvQnLGr/b97cJ74KCP978SXvvdz1PzyuIXjshM9lOhbF7Orm/E/M7eruOX/+
vCpOYWHh+r9eq74DdXMspgt9ggAIjFsCWXCma+xmXDLVpFixsmFlhfLPilns1MFnPROKgleZeolX
rOi+HwtfuyHGjKU1N0w99dyP9xliqZ74CpKQHdpn3rSviPQNzrrVMuqxWFTF06ewoRPSn87jO8mH
rHnbhQeZu9qvrvpqDfllyf/61a8uv4bUuO88eZj/LX3NdGuod4cvj7zmpTWasoz32E+eOcwumhHC
VT/U28s0x7eQ2fTUD/U+Iz3FKykEwGV0KdCPnxhiU6Y7hFcUf3qlbEdeQ70nrtbDEnhQgoxJSAfn
8JP/wZYa3PRaokG3W/7Edl46xZ/+xGVs6NBLZriEoHzZ1c4BCcde4l5242bxjIuocq+fWIuxWLmR
9Uma5b2rv0L/NVqkv//6G1+PrAM0BAIgAAIgEIRAVLGbbn2qOePcfe7jClHFtfd9h9jU2TeqCumM
y6cxdvItZ6XXoi07NbovYIM+hpupIu8MkE5x7KdcExEhfeISwX8WXUW+/al6UrOqrtZlKb52lqmw
jkJAJz1vqLflO9/9jvISkYW267LP6AGdQr86/IxRasqsck0/9Dm6YwPv+BvDlPKbDJ3NVNnTwZlS
/ke6wprKzfmWT7FdZb76aq5vGirjfhGCa0qu1nu5nQzPU8r/wJhZdg2vrH0V8UclrqUuvHDyfWs0
jVPqmslkpre7uLKC3CAAAiCQaQIR778Dx08ZEovDkh4/RNY+7bqVbIfeV4gqp467m0sH3jrJ2Knn
fmA5Jcmn0usoaOQNpgMyivvCqHiM7HpkY1PUuxbye7uqG+ThlYogLzX6S+h5qcZNW6qQY6KSxRhp
VZHMBn2Ojlv0Irgig+NTbHeRr/kD5TvDy/sPsymzrrVbkCkV/btPHibcpvlWb1J8FZkA1+TJk6WN
E7rmBJhtDBEEQCCnCUSrbnLjH5tVUcGHrLmdg3iWg1ZJ5/wWlkzyluv6rvH/1UT2INMTeYNBOvdf
luszU6brCggpHGrqumMmOFdNhJapud1ruaM9Fpef0V0igguOO4yHAka/8910ydqZgONHbHf+xeWz
KNrhv+koAOlI/4TtaAEeOSrOHbDrmrGY1siEJBsnxWvCrhkZUDQEAiAAAqEIRKluDuzoJN/1DbcI
bVMYAqdd7vMQTiF78CpS3+y0HzA/sGPHPmqRO8e9zJ+BkUXeYGAJ0lcQvlrN4yxcwmqcn3P1l58W
qglpoobbPX0/6UsIs2I4S5pFOxQGPEN9Nvv1OToKIbC6440WRMPlf+B9DFPkcPyK7QFYeO3pQCR+
/pHFV87ryGBTHpbrpmuGCqdNP98oAQIgAAIgAALOBKJSN7kbnI7AnHWrbjiccePsqaomqB0sZErB
VUVr5lDaKvYhVKykwz7JXa6e9ilFEbYskRl06PHGLVz31C+SxPJv16Vhl3CUDWZhEZKqQY7wKeVL
pb2r+FM8r6T3P4zAR+kRdrLnKR52OinTdKanmAZ5pKDvcYgoQ0ezYtomDhsnRO5v57qT3YAXZHTX
LP/MZRQ0ahk45etwX/NS/UBQL5Hc4KQdhlMBf5PCZ8r9/FRNg95B2maKI31/u1gAtS7fHHiAg5Pq
HmokqAQCIAACIAACvgiM5iAkOvLHuLTfGFLO/rnwyvniYCGtCB2JtPC05XeEZlRoP0UkSvDzh2an
q+IwJOpG/0kjrScpyi3Sripvdz5qnLv5FD9+qWaZ+C0j288GiZOMzONE7RLO8GwwpbqvCfBfyPkg
pONHe9VfFaIzb8pr7l7+u8Y5N8XXfOLD7+37yU+0g5D4fe34Sf2EHDp3s/jqD7+3/+fd+mFJZz7x
1aUfoOMd6Yig668ovuL6D7/3zI/0g37OfKJ21rHe05ebByHRKUL8QEc6wvIZ/XAiY1DFlOPzi+PG
QY9eByGZZwDxM4lYefmkbnkQ0htDZKeTaUy/6ZOH+hhHhbqPLpUrlfzEFDp18knjR5ioj9q7P1+u
kUo9d9M8P8gLzm9SjghVTh1yb5D0ZLdJUQ5C4k2dmmY9VFUdVDGj466On73MOGJT3Dz2k//4+dGz
7Kx1VZjHM/EzoX4769O/P94PQvL5scJBSD5BoRgIgAAI+CHgcRBSYmtLKzWx/UePDw8Py7ZaWlqM
H7F84ZW+668q89MHymSaACU8ff3rf5npXqJunwIfW098Ihu/GxS15GPdHhmq6ceBjF9jikQcilV9
kn0m2pCJSAQbXSP/+WTnrUtuPn36dNBmioqKHt/5xEvP76Ww7qB1UR4EQAAEQCCFACmNFVfOqq2t
le9T3Pzn/vhW7e/8wiJ6ARkIZIYA9/oefsbnEZ6ZESGerdJPjF70CT+Ofv/D43lFyqlY/iuiJAiA
AAiAAAiEISCVTHpFFbsZRgjUmRAEKG7yot4Wfz/jPiGA+BvkNVXRmoS5mZkOHsjCz9P7Gx9KgQAI
gAAITCACUDcn0GSP1VBJc/pqtE7hsRpJjPst/nS9epJ/jEfiJjp5xoNe45AChgQCIAACOUkg8e/t
j5FgP9zWhtjNnJwgU6h4xm7mOFSINx4IUOzmaIaB2M3R0ENdEAABEDAI2GM3P7+iWt6FuhmbdULq
ZmxkhaAgECsCSBWK1XRBWBAAgRwlAHUzRycGYoEACIAACIAACIDA+CDgoW4idnN8TDFGAQIgAAIg
AAIgAAI5SgDqZo5ODMQCARAAARAAARAAgfFBAOrm+JhHjAIEQAAEQAAEQAAEcpQA1M0cnRiIBQIg
AAIgAAIgAALjgwCpmyPjYyQYBQiAAAiAAAiAAAiAQA4SSJ4bOZeDYkEkEAABEAABEAABEACB8UEg
mZ8oODN8enwMBqMAARAAARAAARAAARDINQLJ5w7vmpQsyjWxIA8IgAAIgAAIgAAIgMD4IJB8PP/f
n/vVk+NjMBgFCIAACIAACIAACIBArhFIvvk7x1vP//MIG841ySAPCIAACIAACIAACIDAOCCQPDPM
fvmRM0gYGgdziSGAAAiAAAiAAAiAQA4SSLLX2cjrCRyGlINzA5FAAARAAARAAARAYBwQSPxj6w8S
jD39+E9HhjWds6WlJZGg9/hFv7a+48WqcTBODAEEQAAEQAAEQAAEQCBzBJZe115x5aza2lrZRTKZ
/PyKau3vSwtKigtKEkzTLzMnBFoGARAAARAAARAAARCYgAT4Me8I3JyAE48hgwAIgAAIgAAIgEB2
CNAx7/n0yk5n6AUEQAAEQAAEQAAEQGCiEaDfTMcFAiAAAiAAAiAAAiAAApkiAHUzU2TRLgiAAAiA
AAiAAAiAABGAuollAAIgAAIgAAIgAAIgkEECUDczCBdNgwAIgAAIgAAIgAAIQN3EGgABEAABEAAB
EAABEMggAaibGYSLpkEABEAABEAABEAABKBuYg2AAAiAAAiAAAiAAAhkkADUzQzCRdMgAAIgAAIg
AAIgAAJQN7EGQAAEQAAEQAAEQAAEMkgA6mYG4aJpEAABEAABEAABEACB5Lmzp+kFECAAAiAAAiAA
AiAAAiAQIQGpZNIrsX37dmp369atw8PDsoOWlpZEIiH/fuGVvh0vVkXYMZoCARAYrwSGX/7j8To0
jAsEYkEgec2PYiFnLguJfcz/7NjX29Lr2iuunFVbWysbSSaTdXV18m+om/7BoiQIgIAXAdqmGxoa
wAgEQCAcgTVr1oSrKGtNmzYN6uZoAMq6tI99/et/Ofp2cr+F/3yyczRCvvT8XqibowGIuiAAAiEJ
QN0MCQ7VQEAQIHXzW9/6VjgYX/va16BuhkOXUmtCqZu3Lrk5HLTHdz4RVN1EqlA41KgFAiAAAiAA
AtET+E3wK3oh0OKEIXA6+BWODdTNcNxQCwRAAARAAARAAARAwBcBqJu+MKEQCIAACIAACGSBwEjw
KwtSoQsQGCUBqJujBIjqIAACIAACIBAZgeDa5khkfaMhEMgYAaibGUOLhkEABEAABEAgIAGomwGB
oXg8CIRTN//mz2977n+nvG79wbyMD1n2uy3I4X5Oot723J/PzbSsIURNI9If3/zc/3aBzG/p0/GX
n9TOu8r0CMO2X/uFW22LRxO+8wsfDdsq6uUkge6NdIav7drYnWFhB1prEimdOEuSEMX4vZrWARLK
/CtiCblEymURLmOdBhpDbkgRSOTxWpjOwA56jTWKsXrOZmPcP2n+7ne+0/GyQ1fHLLeO/bT5O1TS
rXA2RM3xPsKpm06DKrzx5tsmurpAal/GFNnaqUWMnT7SY2NPnX7sIvPdoku/pmqcmRQpx9c2xMsN
AtUtRyzmmq4N6xZo2l06AbkKFEY37etr27Bwvq31VEm4WGvtxdJJFfQ+H8XM2ooug8KRln0Lwg0s
aNcoH0cCsG6GnrVMPO8+vbR8Cju8365vHus9NMQu+8zyaxh7uf2739nBln71K1+lV+30Z77z3eaf
HAs9ivFaMTnCQod9DPXtu+Fv/z/5+lbfWUI0pezDf5NJUn/zPd7XiuA/m/DmLzQ5NYG/tzdyMbnR
TlH7QovqIthHF84oZKdPv2K7/TeXcV1Tm4snjg7RP4qmLRSW5hSRIh9yuAZb/u1xY9lsP87bMGZn
4b+9Fq5N1IoRgflrj7RUt9WuFuZEz2ugb1+6Io73uzvXOWmbrm3NXzsy0lozI1Rf6SqRWXPBOq7n
KnrtjJrWrg1s3YIwmnS6/nB/fBGYPXv2JU5XWVlZ7g00C89Zj0Fn7HlXXD6L65v7U/p++b97h6aU
/wEpm8d++sxhdtknPlUsSxR/ihTUod6nnQyiuTdpVon+7oG//+v1TfZX0//ZOHrRk+eHz42+Ffrl
y397+Vn+y+sXlRue7nmf7HTw8M7dxt+89Qdf0Py/3DNulDRNg7JYqo9V9VDrf3/0B38Z0hWb4u/2
37jqDpYGXXrna2WFHOQlZdLlnepMd6ZhFEs7imkfLGJDA79qcZmsKTM+xH3oPU8t5Nr/41/qcRCJ
V/UUo/PPb5ZTJgblPAVysHJqqJjXMIMbemXLmo1cE/Xmv2EGJVMkS0CFOqjgnUax/NFGIAIzalZv
YG2Pder6ZoqjW/q1SU+bWdvGSCvTDYGOxRw6Dqpt2lzonWYIgKERcm94TWu39IlLCbmIpoPcWXcc
6HysjW1YbdNl569sqWbrNikqt9KpxaLrNmpp+VVFMCTwuOVT7EDTicIRE1Ctmy+88MKFF16Y0gG9
8+abb6rFIpYg6ub8PncWhVYP7I/gCMdQfC3XN5+x2iv37zdUzOJP1X/1K1Wkd8b/+suvrS4sFKqM
ctE76/967egHl8xPFpwZ5oriaK/XOge4gfOSqSL8jmzaN186xWiTPLz/m+sN+lV4Y5nm/73iY7eZ
JS/5kAgAJd2r7ApFnillFW7xmld8rOJGcjKLa0rZNW7xo7wXU39VJfEatmPj5rLWOnWVzWw6DQ2W
fhTzPnAJY8dPORj//ubwO7wjTjhdTGo6MaZccpGcsuOnmNsUqMOnefkciWVcKe2T5h0wkLTllZNk
ndVUZ9ns8beNZXPFx8xVccXH9DDWUXc62oWP+iEIlJVVs7a+PqrJ9aUF6zYYnmbD8kkmQPqbMX5r
7XzXYvbOA2ubKU200a/9auIIr7epR7bVbmKb+EOem0JJp1M85KKkpoWq7ZFbn1U7GaJoeKpJta22
b6GuPvBoA0PldoajdbFuwWop0MgIsdJriZtut/yJHWJKUSUyAu8oFwVx7t69e/LkyUbr9Pfzzz+v
lqG/I+t7VA05P2cDPHfkOCJTD0Y1GLVy8ac/cRkbOvSS6R8/9hNu0LzaWcU89hL3srvcjEyojDRE
muW9q7+iapz0919/4+uRdJZ85pdPTErqClsETU65cBrXF69SPLx/u08aPj+nWJ6E1b3vTdGhcATL
MoUf5B6t1770f03ft/S3alqsXcDTR7/F7XmyKVk9usuhceHUNvy/v+Afjyuu+iT7t8dlOAE73ieN
i8qVngZLN4raq6ZNYWd/7eh9/NETWtccE9lWNesg+aytIvkXg8IV3KZAG77Fd68NVbZ/9tkntLnj
E1d0aVWQvC7W86tXaBnIYIAZRaT7vnlYCXvQKP1/YkkU3ngD5XtF0Wl06wUtBSKwr48WtFC7FE/z
jIXLdE1UbcxnMarioW221c60pi056IfUwoYu3fEtzLCKDbJ62UJthxlo3bROKUjD2EQRAqbBVpPd
dzhAdctKI4jU1MXTjbq6ZZNuN03l5nzLp9iBphGFoyaQkid08cUXGxon6ZrPPfccreKUMlGLEF17
gZ47Wrch1APb8y66EYiWrr6a65u9ur65XzjSb3LUNl9ubxVe9qsjFiFbzV144eT71mgap9Q1k8lo
knyS7YnWnrf+K7qBDL17kjHu+WXsnU4tFO+1L70ivrVc+AE9aVpqTid/Lcyqwmj32pF3U4WQjlqL
/cwmp+5f3tsrtFK3yxpT8sTf+BuvQ+PzPnSVGFqvjB/90RM8DPH/PuXm4xaF0tIwvOSuo7jqQgrc
PNlpzxMSHYhoSKmv88vFGOxfDJOObQqsjfQ81Wlil7d4xpg0JMuJc/2e4DwF0kZeeNVVHxWpURYN
+81XNM6aQZcvp0g69bcaUCpqAhVlyrdDwyssPOiuV9piXrZNW6qQY8CmxRg5fyG5/YUZll+mxNxq
Kfz8xsXlNktqFWaUVURBLe2o/XbiU2y/zaFclgh88IMffPLJJ6WumZeXl6VeA3fj9JwN8dyJTD0I
PACPCtf8AcVj6vbNl8mRPmXWtVqsplKL56o/eXhKee3KT9tvRilOZtuilSZtnBHqmiRx8o3Lj/+/
Z34wioQhY9hcJXJx+HqisRvttEBGVdF09COH6i7ANLl1GqAJf0XTdfTRmRS88+5vPJVaaY/UlM6A
Sp5dygBT4D1EYe0OcBn+9IWeGna0nQaQD0UjIaD4mDVVynBMCw+6/fJZbLSe9CCjc0h0t+W5m5bK
1JZFTGa6ZCGfow4iNfMjdqAGUThiAo6Z6aRxUhwn2Zkc70YsQe40Nyr1INJh8IShod7/poQh6Uj/
hE2h3N/+HWHXjLmuKamRjZPiNaOya8o2k2eG2S8/cubcyPujnJk/vlloh9LsJ82WFy3UjlHUzenu
mS6WzqUFUfOcmka7UQroVv2Ky8QZnPM+uVANQ3QrLb29RkaUks7ScsotAnZ0NLhsHIgwG9svPfVK
O/TUYiS2ihREDNcpsDZigabZqkebnKj706+yadgUtCCt4zIZX+jfEXWaobWFZp0JCJeu5pju3lLb
JhUgzyOJ/BYLmJPuJJ/FREnqq2PspVAjbb5ze3Op7nijhGhY8aA7kvI5av/rzK/Y/ltEyegJBD10
k8pHL0RULQZ57vjp0109cH8E+2k3fRmZMLT/ZX7+kc1Xzg/dfJKU0M98Nd52zfQYRlEiyV5nI68n
Qh2GRE5bM/9GnAE01PfG33BhNO+5XkBm8xi+dX/iytyX/61lAo3aXOfQ6SvvimhLEe9oSWzyElDL
iNJiokU6lD5qUU3PTFfaGDUNHsV49pVXHA8Jkv73FBe2tbAmUnAxHKZAG742szdPY6aOrbVvjRYP
cfi/5k+fUmQN3KRRavJIN/3ZZ5+jsM6oOvW3JlEqAgIy41wJLWSqD7p7o7szXdEDXYpRqKRjXk4w
sc0Tiro3LqAITXteOQ855cn1ymlO0g7pYK2cv7ZrAw8aVW+RZZOfjmQEXnrJl37UAUbnX+wAjaJo
xATGlboZ/Lnjh6aHeuDwCPbToo8yImHo8DM7SNtMcaTvb28RZs2v8jM4cbkRSG4u3fzPl/9zYSI1
9T0wMp4jYp6bSEGN8gxIeXFTpd+ISX6aj0jB0SqKdoL6ZP2Irx/eJMoe75M5SWkvMyRZFKWkGW3U
P3pbZj45ZCyNhgYdPOR2wLvojA74NFOF+Bs0EXquUopI/sVwnwIavgHqzV883qlG3FL7xsSlSJIW
q1JA+tNTAjfp32/+QsstE38bY4ym0yAComwgAikJOjNrGRkzjahJcQqnGQTZuZDnWGvpOdI0SAGS
Na1lHsUMacgYyIxknkAyqoWrW1qYFpQpUuZdrK50WOeRFmYkH8lhOZblx3qKs+2Na8E+FYG7pF5w
Qo7Pv9ghO0C1URMYV+qmyHAIrQzYWXqoBx6P4FHPiWxAJAwNDZlHbIp3hXOddIHeFu1XhfDbQs68
E9u3b6c7W7duNQzyLS0ttC3K4i+80rfjxaqIpgrNjDsCFOLJzc/kQA9x9r47DYoQKLuCvqLoOViU
sUQWzah7GXezMeYDGn75jxsaGsZcDAgAAjElsGbNmm9961svvxz4gPBrrrnma1/72rRp05LXBP8V
lJjCypjYtI99/et/mbHmc6jh/3yy89YlN58+HfgozKKiosd3PvHS83vt623pde0VV86qpePkxEXR
n3V1ddrf50bO0SuHAECUXCZgHkcv4ii0MAktTz8iwWu/8CF+6mqa1KiIOkMzIAACIJBTBPAjljk1
HRAmKgLJ/EQ+vaJqDu2McwKU/256tMVYyXfvO0wiLRyResV/n4lMmxn4odG0/aMACIAACIwxgXHm
TB9jmug+ZwjAmZ4zUwFBQCDmBOBMj/kEQvwxJiCd6eGEgDM9HDd7rYnmTA/HLYQzHepmONSoBQIg
kEoA6ibWBAiMhgCpm6OpjtjN0dAz6k4odXM0xILGbkLdHA1t1AUBEDAJ0DYNHCAAAmNIAKlCo4eP
fcw/w0CpQlA3/YNFSRAAARAAARAAARAAAWcCXpnpYAYCIAACIAACIAACIAACmSOQzFzTaBkEQAAE
QAAEQAAEQAAEoG5iDYAACIAACIAACIAACGSQANTNDMJF0yAAAiAAAiAAAiAAAlA3sQZAAARAAARA
AARAAAQySADqZgbhomkQAAEQAAEQAAEQAIHk0Lvv0QsgQAAEQAAEQAAEQAAEQCBCAlLJpFdyyoWT
6RVh02gKBEAABEAABEAABEAABKSSSS8407EYQAAEQAAEQAAEQAAEMkgA6mYG4aJpEAABEAABEAAB
EAABqJtYAyAAAiAAAiAAAiAAAhkkAHUzg3DRNAiAAAiAAAiAAAiAANRNrAEQAAEQAAEQAAEQAIEM
EoC6mUG4aBoEQAAEQAAEQAAEQADqJtYACIAACIAACIAACIBABgkktm/fTs1v3bp1eHhY9tPS0pJI
JOTfL7zSd/1VZRnsH02DAAiAAAiAAAiAAAjEnwApjRVXzqqtrZVDSSaTdXV12t/xHx1GAAIgAAIg
AAIgAAIgkLsE4EzP3bmBZCAAAiAAAiAAAiAwDghA3RwHk4ghgAAIgAAIgAAIgEDuEoC6mbtzA8lA
AARAAARAAARAYBwQgLo5DiYRQwABEAABEAABEACB3CUAdTN35waSgQAIgAAIgAAIgMA4IAB1cxxM
IoYAAiAAAiAAAiAAArlLAOpm7s4NJAMBEAABEAABEACBcUAA6uY4mEQMAQRAAARAAARAAARylwDU
zdydG0gGAiAAAiAAAiAAAuOAANTNcTCJGAIIgAAIgAAIgAAI5C4BqJu5OzeQDARAAARAAARAAATG
AYHE9u3baRhbt24dHh6W42lpaUkkEvJv+rX1668qGwfjHAdDaGxsHAejwBBAIAcJNDQ05KBUEAkE
QAAE4kWAlMaKK2fV1tZKsZPJZF1dnfwb6mZsppLUTTwUYzNbEDSLBNasWTOa3qZNm4ZP1mgAoi4I
gAAISAJQN8fDSoC6OR5mEWPIAAFSN7/1rW+Fa/hrX/sa1M1w6FALBEAABFIIeKibiN3EagEBEBgP
BH4T/BoPw8YYQAAEQCAOBKBuxmGWICMIgAAIgAAIgAAIxJYA1M3YTh0EBwEQUAiMBL/ADwRAAARA
IDsEoG5mhzN6AQEQyCyB4NrmSGYFQusgAAIgAAI6gZDq5vHtX7nxxm+/lMrxpW/f6PT2qHGL7r6y
/bi1Ied3g3fGpbY1HrwZ1AABEBhDAlA3xxA+ugYBEAABbwIh1U1gnRAEujcmEjWtA+nHOtBaQ2e1
ymtjd/ryKAECkROgk4ODXpHLgAZBAARAAAQcCUDdHJ8LgyuK2dL7qK+ZtRVd0rh0pGXfgqz1PD7n
DqMKRwDWzXDcUAsEQAAEskAgs+qmcK4rl8X9Lnzh+hWNM9uzSWdh6N36Fsa6Gm/RRbHHCGRhHiLu
YqBvXxQtzl87MtJaM8OzqYHWTetYdcvK+bLUjJrVG9i6Tlg4o+CPNsISmD179iVOV1lZWdgmUQ8E
QAAEQCA8gcypm1z1q+9r+PGz+vXjhgV9fVr4Jb95S2NZs3bvxw2s8ZbRapykON7SyIwOm8tIg9RV
R3dhrr3v2Wb6taUFpqD3XRseZ7ZrciOmcknHN3m2Z9a2MbZugWHhtJQTVk/h/jbtn+Kfqt9cvPGZ
Wv1NaS118pnPqGm166T7+nx44LMNC/2NbwKqdfOFF1648MILU8ZL77z55ptqsfENBKMDARAAgdwh
kDF183jXri5W+8XPXWKM9ZLPffe72j9f+tfGLlLwDMXuks81NCzoavxXW+6RCkoxQEpD5C2NXeb9
49v/qYXVNus9MHbtfaRGtvyTyC/yFCZ3JiOYJFz3W7Bug+bE5m7s6rba1aRwkgJIfzPGb60loyOp
igv2tRzRHrNdG0gN3dhttUIOdD5G+mnbY526lti9pbatumXNx1SR1i1YzTbpLvPqdQscozq7Ny5Y
xzasTmMSDTZSlAYBHwTeUS4K4ty9e/fkyZONevT3888/r5ahv320iiIgAAIgAAIREMiYunkJ91q1
1DukrzP20q4WtmDxAlMTZUwUN4yfjgNTDJDSKErmUrPgkVe72IIrZ6o1r11cy0jpJX3TS5gIII5N
E8KsKPRJec1YuKyatfX1pUgjnN2K/scd5KLW/IWm17uvr616w4ZqQ9/s7hR1Pmhpq7plk65FOvSl
WT4XrKtuOWIKNTZo0OtEJJCSJ3TxxRcbGifpms899xz5AVLKTERMGDMIgAAIjAWBjKmbZF0UXmrS
OFPjM49zlSjVVskDKEdxiTbLylQN1qJ5ugozij5zpKrh4RYedNtFmiTbsNBQSpX7pr5J2mX1spUL
KzRtVWqojnXcxyy1XzKyLntsZvbSlHJkDiBGThL44Ac/+OSTT0pdMy8vLydlhFAgAAIgMCEIRK9u
KiZG0jiNi1RP0jBFfKawNdpslVTS9IQHZp/eOuoiTOCecqeCpmcaWeHCgx7k0vVNoW0unKH/k3vW
zdyfIA3ysjNqNpEcSBYKyg3lR0vAMTOdNE6K40wmk453R9sl6oMACIAACPgjEFLdvIR84axlV0qs
pYOT3JBChFJq18wrF7CuV4/4k9BnKYc23cVRhfHZfu4Vk+GVPCTT03ddVuau/EkFs5XS2CvKKP9c
/nMjb5e0T58jFllItjOXqpEB7JMfikVFIOihm1Q+qq7RDgiAAAiAgDeBkOomu+RzX6REnHolm5yS
v8khbiQH8Vxwy5FCivInMoNS4zopsXw0RxClCmQRx0sYSioyYjxjt1qUUM3ujc7OdJkTtMk4rN2S
gy4UzNraNs1zLlTTdcFSfeav7aI2FhgKp5ADuUKxW0rxFxjqZvznECMAARAYtwTCqps885tydej8
Ij0yUx5CZCabk/1Tidu88cZ61mx6yylLnWr3mXGddL+v4U9HdQSRVSCLONwY6y4MDUUdyWiU3myu
k/lryXvODzuSV+dCnpuuKZZSx6R7PHuckoO6KmopoFJcwvduHKbJ9U3GdFukyACilPZgYZs8+ehI
2SZTjjT21mxCQl8ThwDUzYkz1xgpCIBA7Agktm/fTkJv3brVcC21tLSQ5iBH8sIrfddfVRa7UY1L
gRsbGxsaGsbl0DAoEBgNgTVr1nzrW996+eWXgzZyzTXXfO1rX5s2bRo+WUHRoTwIgAAI2AmQ0lhx
5azaWi14kuLm6+rqZLHw1k2ABgEQAIHcIYAfscyduYAkIAACIJBCAOomlgQIgMB4IABn+niYRYwB
BEBgnBKAujlOJxbDAoEJRuDa4NcEI4ThggAIgMCYEUDs5pihD9oxYjeDEkP5CUKAYjdHM1LEbo6G
HuqCAAiAgEHAI3YT6mZs1gmpm7GRFYKCQKwIIFUoVtMFYUEABHKUANTNHJ0YiAUCIAACIAACIAAC
44MAMtPHxzxiFCAAAiAAAiAAAiAQPwJIFYrfnEFiEAABEAABEAABEIgRAaibMZosiAoCIAACIAAC
IAAC8SMAdTN+cwaJQQAEQAAEQAAEQCBGBKBuxmiyICoIgAAIgAAIgAAIxI8A1M34zRkkBgEQAAEQ
AAEQAIEYEYC6GaPJgqggAAIgAAIgAAIgED8CUDfjN2eQGARAAARAAARAAARiRADqZowmC6KCAAiA
AAiAAAiAQPwIQN2M35xBYhAAARAAARAAARCIEQGomzGaLIgKAiAAAiAAAiAAAvEjAHUzfnMGiUEA
BEAABEAABEAgRgSgbsZosiAqCIAACIAACIAACMSPANTN+M0ZJAYBEAABEAABEACBGBGAuhmjyYKo
IAACIAACIAACIBA/AlA34zdnkBgEQAAEQAAEQAAEYkQA6maMJguiggAIgAAIgAAIgED8CEDdjN+c
QWIQAAEQAAEQAAEQiBEBqJsxmiyICgIgAAIgAAIgAALxIwB1M35zBolBAARAAARAAARAIEYEoG7G
aLIgKgiAAAiAAAiAAAjEjwDUzfjNGSQGARAAARAAARAAgRgRgLoZo8mCqCAAAiAAAiAAAiAQPwJQ
N+M3Z5AYBEAABEAABEAABGJEAOpmjCYLooIACIAACIAACIBA/AhA3YzfnEFiEAABEAABEAABEIgR
AaibMZosiAoCIAACIAACIAAC8SMAdTN+cwaJQQAEQAAEQAAEQCBGBKBuxmiyICoIgAAIgAAIgAAI
xI8A1M34zRkkBgEQAAEQAAEQAIEYEYC6GaPJgqggAAIgAAIgAAIgED8CUDfjN2eQGARAAARAAARA
AARiRADqZowmC6KCAAiAAAiAAAiAQPwIQN2M35xBYhAAARAAARAAARCIEQGomzGaLIgKAiAAAiAA
AiAAAvEjAHUzfnMGiUEABEAABEAABEAgRgSgbsZosiAqCIAACIAACIAACMSPANTN+M0ZJAYBEAAB
EAABEACBGBGAuhmjyYKoIAACIAACIAACIBA/AlA34zdnkBgEQAAEQAAEQAAEYkQA6maMJguiggAI
gAAIgAAIgED8CEDdjN+cQWIQAAEQAAEQAAEQiBEBqJsxmiyICgIgAAIgAAIgAALxI5DYvn07Sb11
69bh4WEpfktLSyKRkH+/8Erf9VeVqcPq7e2N3yghMQiAAAiAAAiAAAiAQKQEysvL1fZIaay4clZt
ba18M5lM1tXVyb/DqJsprUcqORoDARAAARAAARAAARDIdQJkf/SvbsKZnuvTCflAAARAAARAAARA
INYEoG7GevogPAiAAAiAAAiAAAjkOgGom7k+Q5APBEAABEAABEAABGJNAOpmrKcPwoMACIAACIAA
CIBArhMInyp04ED/hZcWRzK+d48ei6opN3my0EUkKNBItAQw78QzNyHkplTRLr+oWgMr/yTByj8r
lJywBIonJ48dO5afn19YWEjJ4wYH41Qi+c7IyIj6Dv1N1+nTp4uLNd0vUKrQqNTNmZeVRjJbRw73
R9WUmzxZ6CISFGgkWgKYd+KZmxByU6pol19UrYGVf5Jg5Z8VSk5YAkWFZ7KvbsKZPmHXGwYOAiAA
AiAAAiAAAtkgAHUzG5TRBwiAAAiAAAiAAAhMWAJQNyfs1GPgIAACIAACIAACIJANAlA3s0EZfYAA
CIAACIAACIDAhCUAdXPCTj0GDgIgAAIgAAIgAALZIAB1MxuU0QcIgAAIgAAIgAAI5DiBvLy8goIC
+m/kck5IdfNox1dmXXzFlzuOpeLc+z16f9amF8z3B7d/2XhH3r34e88bt9W7altqO1otqmi8lBYi
n1A06EEg+3Oh9GiuN/syy/qsPb/pill3bj/KmPgsfOVHg6EleOFh4xPh9nFI27asqH6yGOMSOn5I
07bmWEAwd/jIB2otRz7XGVrGESBSFkNO7YeOH8PQU+9WMQc+14HGhMIgYCNAWiYdwzl//nz6b+Qa
54RUNyXiM+zt4/Qfh+vUMfae9W31nb//XsfBdy237eXlbbf3//7zF9/97+Ef8PiMREggk3NBj5+b
/96Qdfuqef93r1hvc//80NsvPLu61LbMIhyXd1PH3nqJsWunJNgbT//HDvb53/toydsnw3RO6sXy
b7Kz77CTyifCbdm7d1Ayf/Hn6e7f795rtPNCdxO9s3j+7yddPqRhxHX/yAdqLQc/13//+Ttb+wIN
wqXwKBA5Lga1mzHi5vYxDIQrwGc2+PoPJAkKg0CmCEhd8+qrry4qKrrlllsi1zgnsLoZesp2rfq/
jw866qnuTa5vfvbtZ+WreT0Ve+IvW54K9YAPLTUq6gS85uLYj+60G6HpOcrffHivwdD+jgPeo798
mWtMm3/M5/2JL9Pf//izrrfp/3EryPU3buovZpOlDU95GYZGR0kimsTB7l0/ZDeXzmRsYPCHpHfO
vITlF+m2SVMkxcxvkVOYRRkvT7omXZv+9OLrN5tw2PGOr2kjUh0FXrIXz/+jJYLPvrOy2N5u3vL6
2ptLEvlsEmMuNDxlFhJKtnZ/gtqgYtyVVe783sNyGciRelxj97nW1hXfUpr52vrhT/cMvnPaWX6X
wWqT6IQoxVCdYvg0weq+IPfF4Awvi9xcP4ZcMs91pS4D5TNL1Rw+DhF9NNEMCIwRAdI16SJdc0ZJ
yazS0rk33njvfffKN6OSCOpmUJLrv/xltusbLd0ng1ZkRcXsg/T69OqONVS36dnnU22ogVtEhbAE
HOeClMh5XyH1S7sMw+f18/k3BPbCL1/hyqKhDH3+7z4199faO+5y7Fq1o5ddWDz3PmHRvPa8OelO
VpD1VQu4odFNkrDDNepJReHqr26nbzyrbrnx4tu5VtdU/42OE0WyTNPyz3Ozovb3XR2HxHi/Z75J
//zhV/+q443TVlneZzqKpvpbVv3YaOH/7k1xFDgP4dLfv4Xrmy/9alDwkSrCl3//WsYmTUtLwyqz
1iONVAxTXH//ecXMTDNpnegdX7n6igcVdZn98O+/KZbB4j+6MsHeSRmpbQBj9rlO0LriW8rUyZpM
5zRRLfLv+Y5lVSuD9UIkW7QsUc3waanFF8ysTT9PYWIuBufZlu9mlZvTxzDdp8yCUdoXBBAfHwev
ceMeCOQcAUPX/MiHP1z24Q/PnDnzoosu+ujsj27513+N0MYJdTPwxN+wdPPNrGnrjkFfz1Gn5i//
MHcd0vW+1SkfWBRUGDUBcy76Ov6FdK+bpTGSXvv/brFuhP547Xf+kLFd//HcURF98fzPuIt88R/d
cAkr+IC7BJf+/mKuQrF/XHnxHBET/KZQDnTdQFT8+Oo3/4d39+O/40viy82ruYJ11l2S0Y635HP/
+PYvnti8mPel2VwX/93+Zx9YfuFx9r5onP/TMMfuWrXtpzRk6UmUWISZdld//7sli//x7X/j/xBN
rZxriKa18GPeC3v1l2+l1ch5zZL5f0gEfvjUc4PnT7Nj3eTiZ1++YS7LY5OPpqeh9Sj8BlqPR3+2
i+uamhVN4tUu2SD7/N/9WBnRP3x/66sK3C8/wQf7wPJLEmySpoinR5/Vz/WuVbfcINeVtDF//pM3
lJgiGvL/uqPl2y6D9UDkMVZrLWFYber6ScmnXRZDemqMZZab28fQx7piyjJQLDyOHwfnuCw/w0cZ
EBhzAqRurly58lOf+tT1H//4Dddf//EbPk7/n65rrr76qaeeUn9UfTSiQt0MTu+S5Xfcx3b91Td+
dDB4XdTIWQKH+0nFkTa/G/nr6r/aRf9+6fARsmeUzF8ilKFnB8+cZjKs8Mu1aXWRks99T1PIxKCd
7ED09oXsgyMd//hXP2R/8cTKa1neBWzyWx6SRIDvzKv/sYt9/kMzWd6rP/tHxq76UEmigH2QVGfe
tvBf0z+L597xXa6ivXp48LfiS9G04r3/zrHcTFX4NcLOMTZZ151J7A9eLG9wvYe3cPXvVMg3hn09
iYv/+O57SY996mcDp4Wvn62/gWgUMpaeht7jp3//60aPspawj5IV7ao/4Y1rl3br7ptpyFOMkf7w
yFuGIfPzf7eUa7rcF3EJd+Xn/EWq8yM0nHxNM1bk/03/Ts7BabBvuCPyGLCgt/jvaiXYhX/5du8L
b9O6pW/OToshF8i5fAx9rCt1GeRbh+L4cciF0UIGEAhB4Pz581u2bHn44Yf/9m//9stf+cr/+l/V
iz6zaJ64PvnJTw4PD4do014F6qY3xv63dH+cWu5jX31iNfvhN374s9+eDzMLb71lemzD1Eed6Aj4
mQuytynKkBZWyJWhovS6SKV4Hj8rTX3cDmSPoBjc3rRqB1u/5QtcxZl2odfYSJLRXdyZLgIGfviN
Wy6+vo7rzd+tv/iGtZZTGsh/rV7DA+IkBzXtyUuIaz9k6j2BhJ1beT83IT/7ws9+vJ3d/ABXaAp8
0fDskeyjQooUdYG/lccuslouh0mD1i8vu7XLsPyspUBEvAorsZvPPst1TVKd1eGkym8frLF3eSBy
kWDXkX76RiHBTr6Y6+Xe69Z71Fng5uNjaMqofsocloE82MTvxyGyCUdDIJA5AqRunhXX+9ZLvkN3
I+ka6qaKsfTyz9E/m1qe0NPGNbcpT6qwXnPryPbzj6vu56aDgNfzm4TzixtvcI0xAXUuLitdStJY
nuLkbOUPcnHpytB/cE/6zcK6460MaVkID+/lz+Orl999nzbWlAiKYz/6BoUY/sWW1eVkLpIGwsu9
JRkNM3Km7/8md6VzZzH3hMrx/u1ySgAXznRj8e/dJsJYr/xQyfFnuWvb4iJPJwIp4iEuESP7w79a
uWonmUg/zk2kXKHxRyO1R1nrH39GGfh0HfvR980jArRb33/iuJRRjvTzM20f8mBDyP7nWo/d5FbY
4lTVWRPeY7BywTsi0io3PSfwOdH72YvC3iEX+WgO0qK0m4zuh64fw4tDfsqOi0iPQB+HYAsJpUFg
nBKYwOqmEvyk75jagSy7/urqcplaKzIkFn/y9ymEK8U6UvzHD3z7Zv+Loqn+xostbfrQV/y3jpJB
CLjMRdnyPyONkFaF5kwXLvXGjmH9QX79is23kDL0DVoSn/8D4S8W1h3Xw/a0bOtv3iwzf5fzEDrb
d4yjHU1Cq/uHlbw7vkIo//3SNJIEGayt7NGfPb2L3fyhUpa3dz93pf8OrW2ecaI50ylKRC5+kVuz
ePPSa9mlv8O/GNH7HMgtq3iIgfX6x3prZnpo+T6ue8MX/9HHDRNpOBpaxF5T/fUcvpoBpuGlqfys
zFsXIxXu5qSDCdR7MDn/uZb0HAc7Q8YWOyFiJZdfwwdOM0uLwUrPXkvLb5Ok/C2G7HFz/RheEvJT
dsnlaT4OoZc/KoLA+CYwgdVN68TS2YGUEVz8x48c+DeRcKBf9C32Af00FmuNkuV3W0r6XyiUWsHb
1L1R/iuiZOQE1Lm4/qtvP/oVde43/7iBm/20Q68uXf5FaaEUqokeJ6eVd0gzv3T5/7NH+tDlRQF2
PBNItYlKS4nleo+neKeRZDQURMjaR2eWJN7m2d9f+fhcllRDAtY3y4QbusjwKXJlLvh9eZaCeO/v
9ou0G83uJVRwcf2PkbY/GuHmzid/uv4Fz8jRCUWDIvaEHVcTe7NIatIuarBzg5k8xC1VlOpkczcH
HUlufq7dB+uJ6B6KF9KuLzeL/DDtSgmF1FY1BWCEXgyZ5eb+MQy1rnh6n9vHIeiCQXkQmFAEEtu3
8+DErVu3GtGgLS0tiURCUnjhlb7rrypTifT29paXk9+PHTjQP/Oy0khgHTkcWVNu8qR2cZq9PUTZ
DpaL9L+Lp4l33mW//q15y3yfsZPHGEUxUIy8liOhlzTf0eupJeXf6kWpFaOJdooE+0RoxL600s7F
e2+z08ZskeXPmiyi3aU8Ei0zhlNMXRUqWetKMwLsjCr5J9m7MifcuPTGvSXxP32pEMSi5Sswn38K
ktpSlAfcUCr36rkXsNNy/SvDN4SRX5NIZm0NGwMUhc+QU37E/IDwYdoYSsndPvWyI/VDJ8u70XjH
s0d5lw+lgCXftwhzRiGvdierpIRCGrTH5HPtvIxdwDrK7zZYGpcHIuPDQnPNfmuhp06Hycq6GNQs
q6xxc15XLh/DtOtKXQbqENw+Dl5bgf9PLEqCQIYJFBWeOXbsWH5+fmFhoZp4buh+sv+RkRH1Hfqb
rtOnTxcXF8sChkJoyEtKY8WVs2pra+U71HhdXZ38e6KqmxmeSzSfIwSy8E0mR0bqIYY/CIq6mXJa
U2ZG6E+qzPQdt1bByv+MgZV/Vig5YQmMiboJZ/qEXW8YOAiAAAiAAAiAAAhkgwDUzWxQRh8gkPME
xAnwz/HoUlwgAAIgAAIgEC0BONOj5YnWcosAPGs0H7kJITelyq3lq0sDVv7nBaz8s0LJCUuAnOkv
vfTSoUOHfv3rX6sQvGM3p02bNnv27Msvvzxc7CasmxN2vWHgIAACIAACIAACE5EA6Zrz58//gn79
ibiqrNfy5SuWLfv8rbcuW7r0c4sX33zllXPefPPN0LCgboZGh4ogAAIgAAIgAAIgED8ClJl+8cX8
gBVKPz937vz7587RTwj95jen1dd777039O57p04NHTv+61++9Sv6afWBwYHQQ4W6GRodKoIACIAA
CIAACIBALAmQokkXHYLJr/PD4qcs31dfZ86cPSOu3/72tCwwmnGOKnbzwku1s5dGIwHVfffosaia
cpMkC12MEgKqZ4IA5j07n68Qc4ep8Q8NrMDKPwGUBIG0BIonJ5ubm++44w5SN88LPZIrnOeHT546
pdY9RzZPfvFfVCelc+rUKT/v2nPbitvCxW6GVzfTjgcFQAAEQAAEQAAEQAAEcooA6Y7ZVzfhTM+p
NQBhQAAEQAAEQAAEQGC8EQhv3Tz57nhjgfGAAAiAAAiAAAiAwPgmcEEBrJvje4YxOhAAARAAARAA
ARCYeATgTJ94c44RgwAIgAAIgAAIgEAWCUDdzCJsdAUCIAACIAACIAACE48A1M2JN+cYMQiAAAiA
AAiAAAhkkQDUzSzCRlcgAAIgAAIgAAIgMPEIQN2ceHOOEYMACIAACIAACIBAFglA3cwibHQFAiAA
AiAAAiAAAhOPQAzUzebNm/y8/M/d4Laa6VMS4lXTPmitN9h6p3YrMf3Bbsu9no2utVxvdT9ktCb/
UNt0b1CRMPFQjy6FkM38p9uAebGNe1PuhhFe7dfWoEvvzpKnmxuPGfFo0GseHXocaF8pJ52/7tw2
oBUxyZh3qYDBOV0v1ile2aouqL0PWtp0XnLp4GTkvuMiyUhPaRsV85LyWWOMo9NhOmF0qJK2p3FQ
QMXiPBx1B3PdLtwXLf84pOyKlg+OMSkpvcs5Mj9W+u30AjPZvuMOY1sbvkY3DuYZQwCBcUggBuom
Uf/iXXd5v/zPDGkPc+ordg2NnKDX7opVs5W9lbba+9kD8tZQ1/qGBeZTkG4t2rf5oEstt1uDfb2s
Wqslm713vqnluNQSEjK9rw1Ni3yomOb4ux+aXdth1zVDCK814tSgu64ZQnJ6IJm1Di7bOdui6rk1
6FHLSTp6bs1c1a7NxYHm6o76mdrkzlsrptt4da2n+o1da+bxZrx7EZroArbbqHtkM6udk/LgrGo5
oLR/oJlZlpz/hRtlyQBzGmW3o2nLilH7eFqV+9E0Pz7q8gU5u7bcWJAHW3oXperlvhatgYNrnzN3
Lj1ifEB2XUcr3OFb+vcbqtc30sdqS+oX3fRkZ1Tds4GxdXuM79Vm71tWtbP1ldqe6Wd06XtDCRAA
gTEiEA91k+Dsf+2QfB14vf/VN956ve9w/+GBgaPH3z5xMgi67q31bcubV86Vdeat3FzVtupRacUc
aH943fKlC0u05ubXNVezhk6xe4pbzZuq5L15a3c1Wms532LsUF8HqyjVW1Tk9Gpwz4421rha6Ys1
7bHaWd0GzJ8NC5pS74YVntpxbtCt+4FQknfvaWAm25KaB5qrmx6WNkKPBp1rua6Ewc6d9Nza3Sqp
lqxoJY1Tn1xLpcFtm5rYhl3atwIP2RgbbP1Gfdv63SNSMRXXjKotRzZXrVtss9UZJUpWrF7P2nY+
rdtWXSXO2I1gc5oxMUbb8Pw1uzew9sf2pHgnRttsnOvzBcm/ppoLsqTmEfG12TQ6Blq0ZEpctI5W
+CMrZhhc5t5LK7xt1f0WK/7g04/RRld5+7LljlpjWqjzFtJ3PPsut3fPOlbVUic/X35Gl7YjFAAB
EBg7ArFRNyWivLxkYUH+pEJ6FRSJF/0RhN78NUOW3dOsyzWS6iU3mRsrKSUnhtZyxVQoK+Wl5q3S
Ul1Z8bhF9fr3scaFmmqrSulZK81w+k13v8V1xe2vpBMfOUGPYZ99eYvh1qDc/M2ABL9uTdUlaoYE
9HQ2MSv20or0aoRLLVd0/Lmr6oWshHph+/ptoRSkQZrfRjxl2/tobYfxLDQ7nlG5tNqwx7jJo66l
IKt31GU953TUrWe3gVlly1lb/6HsdprDvfEFaXxNNeXkX5s7dnTKlR5o0XIl0mmFV93TtWtLjfIl
Wn4tXDi3ZOGSKt/fjS0k51c2MtvXP/FlT//+72d0OTw5EA0EQIDFSd0kXbMgX1c0JxUUTSosLCzI
y8sLPY2D21ZzB+vtwlmjWSLNqCZrHFJ16SyzH6GsGJfrrf7+tuUvbnKIF+RV3WpxfYU1bNKCSns2
Lm4w3UlUran+sSXSp3+whdXPNIUUTmHVDuFHQncxuBHXrUGLu980n3hJTrrmYtYlXXJkWWxaZPPH
WWZRqhFpUNjmvc3/SuCGE5vhmT/S2Ia7FVuOU4NStoH+F81noVqMvqUo9s7UBrj1VPfU+5c2spJe
iySyTrLUEP/AWj5EWeo3R7sRyllpmV06vllpZuBAi1YokdeVOfhm5s23fIXuIZe33EWtH9ggoObe
3pJqGbV82fMzuiD9oSwIgEDWCcRG3UwkE/n5eYWF+UVk2uS6ZkEB6Z7iCgNNJIjM4aYszUXOLZFs
3eIpnZVapN2RJTv08D7+YLNfwjbmcYvxLbKDLTNC9+7u17VDr1rc1SuCSkWWiYi5VNWXFL9z+mCp
kMJ7QJUBCXpoATMNxu6SEwpdrSdf9k3kdNPtUjYblZgI7XJt0LNW+vUglHjTiqlVkK5zPdCC3vTq
pa/favB27bSdYt3MhCFacuzFPniAU3FRnLQ1qY4myPPqfmjROjPmJP2UT4gSzlZzcxn7XrScFi/s
qL+moBQu72WVWphK2FgRm2VUbVb2mG50E2KKMUgQiC+B2KibyUQyL5nMTybJnJlPf+lXIpEIQ19L
EBE6pZlwQKkkwnvOLxHArsVuhumBCT3shOJ1mlu5Ib12KHNyFzEtmUmkzqh2VnXPdfYIhxPWfy2e
/+S89btLTii0uEkeD6omM5XU3N3ITGOniC0zZHFt0LNWmqEIhzIlA6Vagm2ucxakF2v2tJJmm5Lj
wiMOa+e4B3f6n4dxVbJRs3ybKSnkXVUvq9ZOMcq9FDdiJN6NKxbZG4zrovUvwmDr9xWXN2165Bb3
s8vZepCWURkrzyM1qdn196gue/8yoSQIgEAuEoiNupmXTJCSSf70FF0zpLqp6pRmwoE1rYdbBUS+
JP/DfonCHrfsNfw0KPfZ3brWK1JnlO3b6j107t3acVTCW1p1cmJ6Sm7GepImfZAcZ+Y1994Rnnol
rbmz++4m77x0kno26Fgr/SfMCF60aSp2awq15iqb6Km338z4oZKaqpQSO5siE88zs4eppRd8opdI
zUx3ixuZ0JzUBWmCsEYd+F60ZaVVrKO/zxuoSBIi/XKmYZwWZmmnNPN0MyOy6LSKotkNlWYSXurH
zW106TrBfRAAgTEjEBt1s6AgjwI1KU+IrJvqRVbOUcHTPU3WcEyjSUOvsiQlqA5fZs1XsN6yi5au
QVtEmtWEac2NcHaU2zuNSnijZacUDQ/J9XxYoZAZ9mOLxqmbtdbO5dZTocqnQcF1wZRaaR6NlN4k
sqmcIlwteQlqO/ZexFEDwoqjZ2AEWoE8zwwXCERMQCxIJ+2Q70iaszvQohXmRsfADxGJJLL9ZJJQ
imWaHyXm9zANCwQunjiVQs89Mu/6GV3EQNEcCIBAtARGp6tFK4tna6Ro0ovCN+Ulfelk2gymbtpP
Suc6jfgazQ/jsH4pN44xEnFFqlWAcoC0lHOPW859CUXKo5Zzvq1pdlXFUB4k7uzCCe8xFyVl5VbD
nlbWQ3IDsizqqSWLfNiyUiqWDoUqo6zlIbVIb+LnFjlnU7lHCNh74bKR4ZOSG9prt9oPC0z3oeCL
Rw4QFwhER4AvyIYFtp+BEJHWen53oEXLY6wdVjg/WE07n8hMElKHIY+Q05MdgwyQwo14VlMPPyRE
y+DUq/sZXZCuUBYEQCDbBOKkblJukKFlSh96YE+6jMbTjnWkBuSZlDI7hO+SLrd4HGdH/WolW9zY
Dd1vyb6UIzPJXav3la7WIj34Txx9p+avqGJw/Sl9eFMo4b3WoThaxaAhfxSEghG1SEcnyY0oAt6s
SPIwFVZxFIARy9izkZ/r/k0Rs+XRIDXiVMtVamFeJbuma864c5qzu2xCPH5E6CJLZK0INqXAUKfT
r6Rw8qiB9LOW7Y0A/cWegL4grb9DtoBOQjC/YgVatHR82G7+MxNK7Lj+cwniE+oYf8I/GSIXMMzh
suIg5J0P01dHLffInBQ/o4v9FGIAIDCeCcRG3bx4+rTJkycXFRVJC6c0ag6LK9D88Gg8/sMYMll4
Zv89prmL50Hf02fcot/SMLfpeWvF78EY2eJ64gv17X6L98XMlNs9lYppzbtWI+XIy1jGWmZ1/q5v
5slDImmdn8DsceaOiSWU8B5UxTHpOo0pM1dd1yWTNkSko5Pk+qNCMN9UepCKUbyXVN8pi0j8gJMc
L8/EN9m6NuhZyy65PcIs5fckRQiE/UB+L9n4Y5WfzNpVbglc28BzvNTA0JQcF9tRA4FWLwqDAE81
s2bxG+ogX5Dyl4T03YP/wpA1TNnXojUo85RKdYXPXMXoV7LEJ9Qjm0d8UTQjzt0Fts0m9+B3tJvm
WLWAn9FheYAACOQsgcT27dtJuK1btxp6W0tLi2E1fOGVvuuvKlOl7+3tLS8nbyo7+W6WBkU/mO6n
p/pVq/0UQxkQAAEQAAEQAAEQmLAELig409zcfMcdd4yMjJw/P3ye/jdM/x0+eeqUyuTc++fEdf7s
2bNnzpyZOnXKz7v23LbituLiYlnMUAiNWqQ0Vlw5q7a2Vr5DlsG6ujr5dwzUzQm7IDBwEAABEAAB
EAABEIiWwJiom7FxpkfLGq2BAAiAAAiAAAiAAAhkhwDUzexwRi8gAAIgAAIgAAIgMEEJQN2coBOP
YYMACIAACIAACIBAdghA3cwOZ/QCAiAAAiAAAiAAAhOUANTNCTrxGDYIgAAIgAAIgAAIZIcA1M3s
cEYvIAACIAACIAACIDBBCUDdnKATj2GDAAiAAAiAAAiAQHYIQN3MDmf0AgIgAAIgAAIgAAITlED4
Y94nKDAMGwRAAARAAARAAARiS4B+Igi/KhTb2YPgIAACIAACIAACIJDzBPCrQjk/RRAQBEAABEAA
BEAABEAgIAHEbgYEhuIgAAIgAAIgAAIgAAJBCEDdDEILZUEABEAABEAABEAABAISgLoZEBiKgwAI
gAAIgAAIgAAIBCEAdTMILZQFARAAARAAARAAARAISADqZkBgKA4CIAACIAACIAACIBCEANTNILRQ
FgRAAARAAARAAARAICCB8Me8n3w3YFdhi//kpzv9VP30p5b4KUZljv3Xpq0vnhKFp177hdWfvlSp
d3THv/zbc0Pyjctv/YvqCvNe75Z/eOKQcy2PW24yuVQxZJty3Zc+x1qlnPT3n312BgsnQFrZ+JCP
z79v5TV2UZ1uedFzGWyIKt7TpDTILru5YUW5r5lPXyvMePdt+/bjh43+L7qh7q6lxfo/X25r/K+3
UmSzLTlfskdfiMv2jkXa6Pvw1eLAT/75By9dZP2sMaaK54TR9vH01VfsC/lAIXi+o41U2zqMcWdk
D/H6CKTZb70+Kfye9dNtGRqzftZiP7UYAAhkkcCYnLuZV1dXR2N86aWXRkZG5GBXrFiRSCTk3wO/
PjnzkmkqhKNHj156KVfQTp/NEps3+15b/rklV1/1UY/X/ldfu+KKj/oRSOgc0z573+rP/d7C+Rcd
+FFH529mz79isqhKe/GTH1jx5S9+mm793rS3nnj8v345bX75DO3WEyev/cLa2s861XK75SYQ3/Qd
W9v3n4/9gl33pT//k6XXf+Sg8veFQtcMLoBHLU22fdu+/9PjbNpHfq/CUJJ0qR1u0dPu8deKNA6z
j//031pev2jhNaq+bhuyF3D3CfOoJW4xfS76/uuJx99KJ4P+zPOuFXi8JMnmx35x0c0Nf7aMFgy9
5vzm+e27O/su0mEe+5+n3mA31Gkripf5yHvdu/+r21xyfpZsJsr0btn6wik26UPX3nClXPtjd737
5vPPHZ00R/ug6XJwdGc08ewYtY/n/7z7kbGXP6vk0qGQChl9peE71Ufe2/fciz97ZpSbmOcekv4j
4LHfKuScxsU/L+beQmL82+7fXsX3RvFZW3jRwZbHn9g39h+lrM4/OgOBaAgU5J3ft2/fxz72MWqO
dD/jOn3mjNrBsHaNnBdXUdGkX77V/7vX/O7kydpzw1AIjVqkNM744LT29nb5DimT1157rfw7Ns70
E6eG5OvkO0On3nn3naF3h979zXu/+e3p06eD4N/31Iunplx3i2bJK7/l2otOvfTTfaKFgZ90HZpS
dqOudVV88rqp7K19Lxu3rqvR7KDlKz97ubWW8y03uURH7lUumikUXHHpf3tUCXdLtE47uGqZU+V1
vrXv5bfI1KpzuHTp566berhrxzGvCfAA7lHNa5pe6jvFLl+ozAU7/D9yBuV/Ha8B91oeKDzHe3TH
9hdPWa0vMz5915euvejQf7W5SlL82YWXsVN9vxjwYpbxe/u2aXb6jPeUsQ4qVtw8i71z8KWjGesh
Ng0rKI4+2/cOWQQ1j03xZ1fXjXYT89ys0n8Ewn38bejJ7fDEIfqscSePfl1TTZ+1Uy897r3/xGYW
ISgIjHsCsVE35UyQppxMJJPif3l5iTz6f8lAQ6hYcZ9lzzInmO/UU8s+Zm5ntFn/hXQxi01c1QKL
L9Y1UY9bomn69v8P327UXlILcatCW6pQ/g4/QeW//QPz7y0vhxPAWzbhViNf21/QYzvlcrvVu+8w
syKaOU195DsMlrkDt/H5l/8ylLA0tZw/lr2uSl6aj3Go8b780+eGLrrhk6l+/BnXlk297HcrvHtU
11L2t5hj/9V5+KIbrr08+z1H2uOll0xhp96GuklQDRSXLv2z+yyxJcX0CWUnB4hSBvYQHx+BUB9k
2zI59ouDjp+1Ty+49bNK7EqkywuNgQAIREwgkK4Wcd9Bm+O6ZpJUTK5lyhf90/D7B21N6IKt3PH0
KaEcHD0+xKbNuJTikDTtUNF+6PbUixWXsdjEjcv1lu72bfiL++h162VvPa636VSFPypuvUyEK/3F
ffd9yfxbRlWGEcCrVvlKkkq1FpgD8rjlQFl75LsP1qxjAa4F0ZKDW/D5wg3sxR9YmWsVrbW4Psfe
6vyJ1DN6t1BwZFr1jjHPWmHGOzDwDlNs4eYY6VuKRywpV/Uuv9VnsGmIJZ2+CrdIsWtvXWp+r0pf
JydL8A+s5UORk1JmRSh3FC//D4Wb07YmxYh2Dwn8EUj5+PtGI1wTF11ui/ZhrLzCIeLcd7soCAIg
kE0CsVE3KZqUdMs80jiFpplHWqe4QsLizuJGSsQxXMPHjpxk7NB/fXvfNVw1pNeXyvp+8A+aPZIe
bPZL2gzcbzHpSNJdz4adz6uKy2g8qoS7FY6azZ4koMnLZbBGRzbgqVVI27551tCLPxbRC/rlUItx
q/PN0176N/GtgMez6uacSy/xGJZrLY86XuMdeNtq8HZt5p3nthrmbbHk2DtveYYfhJsbn7UGfvL4
c8xckz5rZb7YW4+bTgCBy5ZilSKDiAcwYioyL2AO9+COQnwZ02KHwm0UXpuV748AsXP6IPtGyjua
cnHsvx/5Hi8KgsD4JBAfdZM70vn/hD+da5nhdU2qLAxamk75z0b0D4XYGwnaMz69YJYeuxlq7o++
RemhY+s2DSW3e6VLl86/nHz9mzTLoojc0kqnHawduM01L3yCh17uVQRwmiZKV/qHJ9hn5beCL8zu
+7dGzSZ66VKP8brW8lI33cdrq8XbN9XKLabSTPmz2hcYITCPOHxuq3twZ8RTZm2Om5fYDZ9TAuAy
2l2AxukUCJXSfQ2fTfH1W7V2ijl+h+JA1IMjAnQW86I+Ucgk9MtvdfZgZICB60fAdb+1CpE6rsZt
6laQAYHRJAiAQDYJxEfdtOqaBqNRKZ1M6JRmwoHhdRLNG9oP/8N+icIet3gNJ2dfmipOsx9OgBAd
pVt611STHnBKsyzy45PI+2+M0Y9nMwW43pRU1IwjqFLFUGod3dFN3vOb9W8FIl0p1SZqH0W4Wox5
jpe9c8TM+KGSmsJkj4VV5eF5ZmxUX2PSzZHrfd2N7uCUDN1m1iqmaO1ucSBZk2cMO/KDwghHNjTy
zOwhAT8CKR9/K0TbuPSYkxkXX8SG3vbOrhvb3LsxXA3oGgRiQyBG6qbMENLsmlLLHJ2uaeiUPPrQ
Go5pzJ+hQlmSEhQnMpX0e0tZFB5V3JaO3158yxZyjZp6FeVRcaOmoaP7y9uweKjJnKxrabpxyzmu
0ahlC1MzkyE8BhSulmjQZbwV11zOhvqeDeEW53lmY3HxfAtm6vfcWy3sSbAhjcVsZLBPnrEnsgBt
ds1o95BQH4EwCV4i8NoxBEX46PkCPvpsBoGiaRAAgSgIxEbd5Lom96Kb8ZphdE2e/W19vnJFZNY1
lF9cXnFZiidXSx4iE+aNZRdZTFnH3qazeESUutetyy+iB7piANPmy6OK24yGFMBd7CiWDrUhMkYv
4dayS10Gmwa4i4bqUcv5cWU1S9tHF66WrR1zvKSGfuqGKe8891Rwfx9fPBJadi9x0oKp3HMjq7An
jWXeUnYJTITe5G8ZpJwZxAeegT0k/UfA44McZDKKPzbb6bPGD2miNcxPhzh6PEh7KAsCIDAGBGKj
blJqkJqHHkbX5HuuiMYzj4qUp8rJYzj5QZsut7gPaOjFViUbWs9nF+54l1u8QfMWnev5z40i98ij
itsKCCdAiI68l6BI2zfiDumocJHjLDQnl8GmBW5EgoofkvmHb4vAUI9aWvyoHhwpDuTTZ5DYulxe
tTyG7DFeLiQ/dvQJPXJUNCMiRClgzj1hVqbSLzB/eWgMPvTocrwSEOHUZNd0+gqRgT0k7UfA6+Mf
ZA5EHqH1s6YfaC/2H5GAjwsEQCCnCcTjRyzpV4W8KXZs3+n/RyzVH4JL/QlE80fe9J+O1Dv2+CVG
f7csP7vnUoX/HBzTfpVR/ZsL4a+X1N9ITP8DknzIlHnj9COWDrfUH6zz6MsyWA/g6g9LMjZLFcOj
lnrLyWnovFjS1wo4XtGN9Rf83Iegy4QfsUyZHX8/YpkTv7c59lu590+PWj9NhrTmksvMHuL1EZDf
wYxzBtx+ctbfT6q6/VomLaHWT9+1euynBxKAQEwIjMmPWMZD3fQzg/7VTT+toQwIgAAIgAAIgAAI
jD8CY6JuxsCZTnqkn9f4WxAYEQiAAAiAAAiAAAiMAwIxUDfHAWUMAQRAAARAAARAAAQmLAGomxN2
6jFwEAABEAABEAABEMgGAaib2aCMPkAABEAABEAABEBgwhKAujlhpx4DBwEQAAEQAAEQAIFsEIC6
mQ3K6AMEQAAEQAAEQAAEJiwBqJsTduoxcBAAARAAARAAARDIBgGom9mgjD5AAARAAARAAARAYMIS
gLo5YaceAwcBEAABEAABEACBbBAI/6tC2ZAOfYAACIAACIAACIAACERH4MyZM83NzXfcccfIyMj5
88Pn6X/D9N/hk6dOqZ2ce/+cuM6fPXuWqkydOuXnXXtuW3FbcXGxLNbb21teXq5WeeGVvoorZ9XW
1so3k8lkXV2d9nd08qMlEAABEAABEAABEAABEEglEN66efJd0AQBEAABEAABEAABEIgTAfxmepxm
C7KCAAiAAAiAAAiAAAj4IYBUIT+UUAYEQAAEQAAEQAAEQCAkAaibIcGhGgiAAAiAAAiAAAiAgB8C
UDf9UEIZEAABEAABEAABEACBkASgboYEh2ogAAIgAAIgAAIgAAJ+CEDd9EMJZUAABEAABEAABEAA
BEISgLoZEhyqgQAIgAAIgAAIgAAI+CEQA3WzefMmPy8/o5VlBrfVTJ+SEK+a9kFrvcHWO7VbiekP
dlvu9Wx0reVxS+uSmt2410XEvQ8mpq9sVQXxkFC5lXiox9piGAkH2ldKFPx157YBRxntEtrIKCTT
0kg7VbwFfw0G6Kv7IWNm5R/q/KZpx1rXOlkcTkrL9nWVdshZL5BmTrMnj1iBKZ81xlTxnAg7VMme
yGPWU3pWQrSUpW7sPPLD7rgRWVr22iHDbDJjxgsdgwAI5A6BGKibBOuLd93l/fIPlHbSOfUVu4ZG
TtBrd8Wq2Va15n72gLw11LW+YYH5FKRNdtG+zQddarnd0sTqfmh2bYebiD0bFzdY7nlIKG4xXYwN
TYsUjTOMhPSMmbmqvVo2eKC5uqN+pv3Bz2wSetH2EMP/JKklg4zLq4fBvl6mjVTM78iJe+dr5T1l
Fo/eBWy3XBX0OrKZ1c5JeWZXtRzQ7kqSzLKuwg08o7UCzWlGJfHZuJWw9vG06v0+Wxrnxbg6qC7X
kV2N6xZr339mVN2zgbF1e1K+phKRni2r2tn6Sv6JSLNDht8Gxzl4DA8EQMCbQDzUTRrD/tcOydeB
1/tffeOt1/sO9x8eGDh6/O0TJ4PMcffW+rblzSvnyjrzVm6ualv1qLRiDrQ/vG750oUlWnPz65qr
WUOnMEmKW82bquS9eWt3NVprOd8ShcXu3+QqYvdDi9ZZb3pJuGdHG2tcrYjBmvYowgeVcLBzJz1j
drfKBktWtJLGqQ/ZEMouoQdvD1BBZsksG4y8Vx+H+jpYRak+u0pJT5kHW79R37Z+98iaeUaNGVVb
jmyuWrfYZpAzSpSsWL2ete182tlUHA5EpLUCzWmkPUfW2Pw1uzew9sf2pHgnIms/rg3t3bOONXYp
y5XNvZeWq75fzVu4nhmbhjlGXquqpY4v8nQ7ZNBNJq4gITcIgEDEBGKjbspx5+UlCwvyJxXSq6BI
vOiPIEjmrxkaeWTFDIcqXPeqXnKTeYvUrxNDa7liKtSy8lLzVmmprpZ53KKK3GxGeuqRE/RodLoG
t21qqmrZ3Kjec5fQY5zhJCypeWRIVaRYSWkFY/v6lUe4k4RcDou7rV+XzJuG9JDqfueUSIDADTr2
5Y5osH8fa1yofc1Qi3nKvPfR2g7tMazWmVG5tFqagjwudcEEWaIZL+s2pxnvONoOZpUtZ239h6Jt
NO6tDfS/aB8CfUEybPnzK2m30b5FGyW79zQw/Zu29w4ZahuMO1TIDwIgEAWBOKmbpGsW5OuK5qSC
okmFhYUFeXl5oTkMblvNXcm3C71Bs36ZYU/WQMbq0llmP0ItMy73W/PWkmvVWbvlKhtZztjmb9aU
eihJqoSMazmsYZMWbyr8oYrSE0pCa9fcyKGaAF0kJK3REpBQrxpoXcWgWotZl3RJkxm1aZEZxhCu
QcY8hpzKtL+/bfmLm1yiVN3a4Q9vxeCtLIAVraoBKaUzrs9ZLUyhl2j0FX2suug7zUSL/ANrmbhM
dBK3NoW7XI0Csg1g7u0ty1P86T2dTczyTduoZNkh+bsRbDJxQwp5QQAEoiEQG3UzkUzk5+cVFuYX
kWmT65oFBaR7iisMCRHwPoc71jUXObd+MQpy6qzUQ/SW7NADGfmDzX4JK6DHrTRiDbTfX8sMz5S9
sE1CKsINrjzeVNgIRRCVpvREIqHQX81IAwohcJRwsPX7pObuFnZfunhogS69Fw2yoOiaPQ3kpmWm
aSpcg859uUHn9psOtsyIsLy7f6b2dcJL5r5+q1XbdUrbKZrTNNzSumIv9uWkmzfdqgvzWYqiDmlI
1nSrlIBmWx8iHsAILIlChNi04c2KvuIebFlulrElBpUsXFJl8acLT/qyypQ4E/v+E8kmExvKEBQE
QCBiArFRN5OJZF4ymZ9Mkjkzn/7Sr0QiEQaJsDtS2gfXKc2EA0ol0bUoJu0EMnYz+oubDVjLA45u
fV2Ns0vIndGLmJbndHDZztmuueSBJRZ+f4r6MmyxrhLyp86GSjOWkc2tdA4VsMpATjotSJTHGKi5
UyEbDDRE7iI8saXGeKSSzB31W4JOrjVFWnmQpySy8LDC2jnuwZ2BRI+wcPpVF2FngZpq1Mzeej4W
JbhY61sVeoqH7qUYFSPZK1BfcS+clpUIktHS2ijIOPUIDukk0Xc2+WXvHvOjoeFx3iHjzg7ygwAI
jBmB2KibeckEKZnkT0/RNUOqmxpwoVOaCQfWVBIeHCayOPkf9ksU9rjlMae6Q9MhcSW1liJhihWw
pOYBnksudKZRSmjEmBrPb3cJhRnY5fKkYUZnksZMBhi9jZANOvfl+4PkZ3JFY739ZsbP3Hv1/HSX
YFzjUc21pYx9V/E9SGvBAKsuZA8ZrJaame4eo5JBIWLXtIjaHCLdvW3VbPPbkUhl0/LTB59+LOXb
o3WQyv4zyk0mdvAgMAiAQKQEYqNuFhTkUaAm5QmRdVO9yMo5KiB6woE1HNNo0ohVsiQlWDUkj1vO
ooktnh4AmvuV+w2F8Sb1HE1Z20iJsAWrWTN7QkrItUCRz6TGmHpI6ALKGKmLGHqKtzC6GCZkXitk
g7xqYPLW+Ug7uTyvomNHZwi3OE8my7Er2KrLMeEhji8C4thg+zYi/A9qCiBf2E0P01m/A+KwC6cU
OqM/S0pWyE3Gl/AoBAIgMK4JjE5XyyIaUjTpReGb8pK+dDJtBlM37dux4cnlR4RYT6Qzjs4R0U6q
lYvyTrQ92uOWOxyR826c4yj8hsJ4wwMxPSR0TsUVRtawEooz9vhBPyn5TF4SGnZBfYCm8u0hRorH
XI0DC9egY19uzJ2ppkPHGM+raK/daj+nMN3K5yukqswjCSxdA9Hf95rT6HtDi2NBwBaXqQhhcd1w
BZS8Oj38OA4tV1IW9dh/wm4yYwECfYIACOQcgTipm5QbZGiZ0oce2JNeUnO39rVezoQ8c1Eew8kP
2hTf+O23uEepo361khJu7NEet0JNtoeE8tYi3SlGDwZulZTCh5JQWBzJrumRZO0wBuHEN/PKezby
zBjtchfDolNq5z5qGny4Bp2G7EpcotPOKOWlKD3CBzrS4+VgLTGyIoKW4lzdbULy0AB7PFyoFYFK
IOCbgJ6Zrh6Ar4XK6IcNy7bEkcM7H36sIyVJyGuHDLXJ+BYdBUEABMY3gdiomxdPnzZ58uSioiJp
4ZRGzWFxBZohCr/bdZ2RRzyz/x7TsMfNP/f06SnGM3cuVfzL89aKn4oxUsL1rBe+cbvfCiSZXthD
Qn6L/0aIEGM2JbaPSkLhXWX8l4QsScG2H/a0jUIcCK/QaFZShdxo6Hqb6GtT6UFu0zU0+DANOpH3
4M3RMTP9eU+lYtD1nEFhFOwqVygtbtjAs7XUPJWURBb10IBQawCVQCAkAZ7i07VeXZBiNdqOY+MJ
Qx3tbfZzvjz2H6+9LuptMOTwUQ0EQCBXCSS2b99Osm3dutXQ21paWgyr4Quv9F1/VZkqfG9vb3l5
Ob1z8t0sjYl+MN1PT/WrVvsphjIgAAIgAAIgAAIgMGEJXFBwprm5+Y477hgZGTl/fvg8/W+Y/jt8
8tQplcm598+J6/zZs2fPnDkzdeqUn3ftuW3FbcXFxbKYoRAatUhprLhyVm1trXyHLIN1dXXy7xio
mxN2QWDgIAACIAACIAACIBAtgTFRN2PjTI+WNVoDARAAARAAARAAARDIDgGom9nhjF5AAARAAARA
AARAYIISgLo5QScewwYBEAABEAABEACB7BCAupkdzugFBEAABEAABEAABCYoAaibE3TiMWwQAAEQ
AAEQAAEQyA4BqJvZ4YxeQAAEQAAEQAAEQGCCEoC6OUEnHsMGARAAARAAARAAgewQgLqZHc7oBQRA
AARAAARAAAQmKIHwx7xPUGAYNgiAAAiAAAiAAAjElgD9RBB+VSi2swfBQQAEQAAEQAAEQCDnCeBX
hXJ+iiAgCIAACIAACIAACIBAQAKI3QwIDMVBAARAAARAAARAAASCEIC6GYQWyoIACIAACIAACIAA
CAQkAHUzIDAUBwEQAAEQAAEQAAEQCEIA6mYQWigLAiAAAiAAAiAAAiAQkADUzYDAUBwEQAAEQAAE
QAAEQCAIAaibQWihLAiAAAiAAAiAAAiAQEACMVA3mzdv8vPyP/DBbTXTpyTEq6Z90FpvsPVO7VZi
+oPdlns9G11redySTfBmN+51EXHvg4npK1tVQTwkVG4lHuqxthhGwoH2lRIFf925bcBRRruENjIK
ybQ00k4Vb8FfgwH66n7ImFn5hzq/adqx1rVOFoeT0rJ9XaUdctYLpJnT7MkjVmDKZ40xVTwnwg5V
sifymPWUnpUQLWWpGzuP/LA7bkSWlr12yDCbzJjxQscgAAK5QyAG6ibB+uJdd3m//AOlnXROfcWu
oZET9NpdsWq2Va25nz0gbw11rW9YYD4FaZNdtG/zQZdabrc0sbofml3b4SZiz8bFDZZ7HhKKW0wX
Y0PTIkXjDCMhPWNmrmqvlg0eaK7uqJ9pf/Azm4RetD3E8D9Jaskg4/LqYbCvl2kjFfM7cuLe+Vp5
T5nFo3cB2y1XBb2ObGa1c1Ke2VUtB7S7kiSzrKtwA89orUBzmlFJfDZuJax9PK16v8+Wxnkxrg6q
y3VkV+O6xdr3nxlV92xgbN2elK+pRKRny6p2tr6SfyLS7JDht8FxDh7DAwEQ8CYQD3WTxrD/tUPy
deD1/lffeOv1vsP9hwcGjh5/+8TJIHPcvbW+bXnzyrmyzryVm6vaVj0qrZgD7Q+vW750YYnW3Py6
5mrW0ClMkuJW86YqeW/e2l2N1lrOt0Rhsfs3uYrY/dCiddabXhLu2dHGGlcrYrCmPYrwQSUc7NxJ
z5jdrbLBkhWtpHHqQzaEskvowdsDVJBZMssGI+/Vx6G+DlZRqs+uUtJT5sHWb9S3rd89smaeUWNG
1ZYjm6vWLbYZ5IwSJStWr2dtO592NhWHAxFprUBzGmnPkTU2f83uDaz9sT0p3onI2o9rQ3v3rGON
XcpyZXPvpeWq71fzFq5nxqZhjpHXqmqp44s83Q4ZdJOJK0jIDQIgEDGB2Kibctx5ecnCgvxJhfQq
KBIv+iMIkvlrhkYeWTHDoQrXvaqX3GTeIvXrxNBarpgKtay81LxVWqqrZR63qCI3m5GeeuQEPRqd
rsFtm5qqWjY3qvfcJfQYZzgJS2oeGVIVKVZSWsHYvn7lEe4kIZfD4m7r1yXzpiE9pLrfOSUSIHCD
jn25Ixrs38caF2pfM9RinjLvfbS2Q3sMq3VmVC6tlqYgj0tdMEGWaMbLus1pxjuOtoNZZctZW/+h
aBuNe2sD/S/ah0BfkAxb/vxK2m20b9FGye49DUz/pu29Q4baBuMOFfKDAAhEQSBO6ibpmgX5uqI5
qaBoUmFhYUFeXl5oDoPbVnNX8u1Cb9CsX2bYkzWQsbp0ltmPUMuMy/3WvLXkWnXWbrnKRpYztvmb
NaUeSpIqIeNaDmvYpMWbCn+oovSEktDaNTdyqCZAFwlJa7QEJNSrBlpXMajWYtYlXdJkRm1aZIYx
hGuQMY8hpzLt729b/uImlyhVt3b4w1sxeCsLYEWrakBK6Yzrc1YLU+glGn1FH6su+k4z0SL/wFom
LhOdxK1N4S5Xo4BsA5h7e8vyFH96T2cTs3zTNipZdkj+bgSbTNyQQl4QAIFoCMRG3UwkE/n5eYWF
+UVk2uS6ZkEB6Z7iCkNCBLzP4Y51zUXOrV+Mgpw6K/UQvSU79EBG/mCzX8IK6HErjVgD7ffXMsMz
ZS9sk5CKcIMrjzcVNkIRRKUpPZFIKPRXM9KAQggcJRxs/T6pubuF3ZcuHlqgS+9FgywoumZPA7lp
mWmaCtegc19u0Ln9poMtMyIs7+6fqX2d8JK5r99q1Xad0naK5jQNt7Su2It9OenmTbfqwnyWoqhD
GpI13SoloNnWh4gHMAJLohAhNm14s6KvuAdblptlbIlBJQuXVFn86cKTvqwyJc7Evv9EssnEhjIE
BQEQiJhAbNTNZCKZl0zmJ5Nkzsynv/QrkUiEQSLsjpT2wXVKM+GAUkl0LYpJO4GM3Yz+4mYD1vKA
o1tfV+PsEnJn9CKm5TkdXLZztmsueWCJhd+for4MW6yrhPyps6HSjGVkcyudQwWsMpCTTgsS5TEG
au5UyAYDDZG7CE9sqTEeqSRzR/2WoJNrTZFWHuQpiSw8rLB2jntwZyDRIyycftVF2Fmgpho1s7ee
j0UJLtb6VoWe4qF7KUbFSPYK1FfcC6dlJYJktLQ2CjJOPYJDOkn0nU1+2bvH/GhoeJx3yLizg/wg
AAJjRiA26mZeMkFKJvnTU3TNkOqmBlzolGbCgTWVhAeHiSxO/of9EoU9bnnMqe7QdEhcSa2lSJhi
BSypeYDnkgudaZQSGjGmxvPbXUJhBna5PGmY0ZmkMZMBRm8jZIPOffn+IPmZXNFYb7+Z8TP3Xj0/
3SUY13hUc20pY99VfA/SWjDAqgvZQwarpWamu8eoZFCI2DUtojaHSHdvWzXb/HYkUtm0/PTBpx9L
+fZoHaSy/4xyk4kdPAgMAiAQKYHYqJsFBXkUqEl5QmTdVC+yco4KiJ5wYA3HNJo0YpUsSQlWDcnj
lrNoYounB4DmfuV+Q2G8ST1HU9Y2UiJswWrWzJ6QEnItUOQzqTGmHhK6gDJG6iKGnuItjC6GCZnX
CtkgrxqYvHU+0k4uz6vo2NEZwi3Ok8ly7Aq26nJMeIjji4A4Nti+jQj/g5oCyBd208N01u+AOOzC
KYXO6M+SkhVyk/ElPAqBAAiMawKj09WyiIYUTXpR+Ka8pC+dTJvB1E37dmx4cvkRIdYT6Yyjc0S0
k2rlorwTbY/2uOUOR+S8G+c4Cr+hMN7wQEwPCZ1TcYWRNayE4ow9ftBPSj6Tl4SGXVAfoKl8e4iR
4jFX48DCNejYlxtzZ6rp0DHG8yraa7fazylMt/L5Cqkq80gCS9dA9Pe95jT63tDiWBCwxWUqQlhc
N1wBJa9ODz+OQ8uVlEU99p+wm8xYgECfIAACOUcgTuom5QYZWqb0oQf2pJfU3K19rZczIc9clMdw
8oM2xTd++y3uUeqoX62khBt7tMetUJPtIaG8tUh3itGDgVslpfChJBQWR7JreiRZO4xBOPHNvPKe
jTwzRrvcxbDolNq5j5oGH65BpyG7EpfotDNKeSlKj/CBjvR4OVhLjKyIoKU4V3ebkDw0wB4PF2pF
oBII+CagZ6arB+BroTL6YcOyLXHk8M6HH+tISRLy2iFDbTK+RUdBEACBsSJAp/wUFRVNnvyByz40
k15FkybJ14UXTp42bSpZ9iIRLDbq5sXTp02ePJmISAunNGoOiysQCAq/23WdkUc8s/8e07DHzT/3
9OkpxjN3LlX8y/PWip+KMVLC9awXvnG73wokmV7YQ0J+i/9GiBBjNiW2j0pC4V1l/JeELEnBth/2
tI1CHAiv0GhWUoXcaOh6m+hrU+lBbtM1NPgwDTqR9+DN0TEz/XlPpWLQ9ZxBYRTsKlcoLW7YwLO1
1DyVlEQW9dCAUGsAlUAgJAGe4tO1Xl2QYjXajmPjCUMd7W32c7489h+vvS7qbTDk8FENBEAgFIGF
f1D59b+87//8bRO9vvfwP8rXYz9sf+H5ZydNmhSqydRKie3bt9N7W7duNfS2lpYWw2r4wit9119V
plbq7e0tLy+nd06+G4kA6RuhH0xPX4ix+lWr/RRDGRAAARAAARAAARCYsAQuKDjT3Nx8xx13jIyM
nD8/TCa8B7/9reuuu+6iqVNVJhdccMEHPvCBSZM+cOLEyTNnzkydOuXnXXtuW3FbcXGxLGYohEYt
UhorrpxVW1sr3yHLYF1dnfw7BurmhF0QGDgIgAAIgAAIgAAIREvArm6q7f/612+r/3z77ROnT5O2
OVp1MzbO9GhZozUQAAEQAAEQAAEQAIHsEIC6mR3O6AUEQAAEQAAEQAAEcpGAR+wmudQjkRjqZiQY
0QgIgAAIgAAIgAAIxJLAH9+6bPHimxffbHnN+8Qnrvjwh6MaD9TNqEiiHRAAARAAARAAARCIH4F7
7/vaZz67eN68T9DrIx/5qHx96EOXT5t28W9/+9tIxgN1MxKMaAQEQAAEQAAEQAAEQMCZANRNrAwQ
AAEQAAEQAAEQmLgEELs5ceceIwcBEAABEAABEACBLBBA7GYWIKMLEAABEAABEAABEJi4BLIQuxn+
mPeJOy0YOQiAAAiAAAiAAAjEkwCd2a7+qtB5+mWh4WH6z8lTp9QBnXv/nLjOnz17dvTHvIdXN7P2
I5bxnE1IDQIgAAIgAAIgAAI5RyDlV4Wyo24iVSjn1gEEAgEQAAEQAAEQAIHxRADq5niaTYwFBEAA
BEAABEAABHKOANTNnJsSCAQCIAACIAACIAAC44kA1M3xNJsYCwiAAAiAAAiAAAjkHAGomzk3JRAI
BEAABEAABEAABMYTAaib42k2MRYQAAEQAAEQAAEQyDkCUDdzbkogEAiAAAiAAAiAAAiMJwIxUDeb
N2/y8/I/K4PbaqZPSYhXTfugtd5g653arcT0B7st93o2utbyuCWb4M1u3Osi4t4HE9NXtqqCeEio
3Eo81GNtMYyEA+0rJQr+unPbgKOMdgltZBSSaWmknSregr8GA/TV/ZAxs/IPdX7TtGOta50sDiel
Zfu6SjvkrBdIM6fZk0eswJTPGmOqeE6EHapkT+Qx6yk9KyFaylI3dh75YXfciCwte+2QYTaZMeOF
jkEABHKHQAzUTYL1xbvu8n75B0o76Zz6il1DIyfotbti1WyrWnM/e0DeGupa37DAfArSJrto3+aD
LrXcbmlidT80u7bDTcSejYsbLPc8JBS3mC7GhqZFisYZRkJ6xsxc1V4tGzzQXN1RP9P+4Gc2Cb1o
e4jhf5LUkkHG5dXDYF8v00Yq5nfkxL3ztfKeMotH7wK2W64Keh3ZzGrnpDyzq1oOaHclSWZZV+EG
ntFageY0o5L4bNxKWPt4WvV+ny2N82JcHVSX68iuxnWLte8/M6ru2cDYuj0pX1OJSM+WVe1sfSX/
RKTZIcNvg+McPIYHAiDgTSAe6iaNYf9rh+TrwOv9r77x1ut9h/sPDwwcPf72iZNB5rh7a33b8uaV
c2WdeSs3V7WtelRaMQfaH163fOnCEq25+XXN1ayhU5gkxa3mTVXy3ry1uxqttZxvicJi929yFbH7
oUXrrDe9JNyzo401rlbEYE17FOGDSjjYuZOeMbtbZYMlK1pJ49SHbAhll9CDtweoILNklg1G3quP
Q30drKJUn12lpKfMg63fqG9bv3tkzTyjxoyqLUc2V61bbDPIGSVKVqxez9p2Pu1sKg4HItJageY0
0p4ja2z+mt0bWPtje1K8E5G1H9eG9u5Zxxq7lOXK5t5Ly1Xfr+YtXM+MTcMcI69V1VLHF3m6HTLo
JhNXkJAbBEAgYgKxUTfluPPykoUF+ZMK6VVQJF70RxAk89cMjTyyYoZDFa57VS+5ybxF6teJobVc
MRVqWXmpeau0VFfLPG5RRW42Iz31yAl6NDpdg9s2NVW1bG5U77lL6DHOcBKW1DwypCpSrKS0grF9
/coj3ElCLofF3davS+ZNQ3pIdb9zSiRA4AYd+3JHNNi/jzUu1L5mqMU8Zd77aG2H9hhW68yoXFot
TUEel7pggizRjJd1m9OMdxxtB7PKlrO2/kPRNhr31gb6X7QPgb4gGbb8+ZW022jfoo2S3XsamP5N
23uHDLUNxh0q5AcBEIiCQJzUTdI1C/J1RXNSQdGkwsLCgry8vNAcBret5q7k24XeoFm/zLAnayBj
dekssx+hlhmX+615a8m16qzdcpWNLGds8zdrSj2UJFVCxrUc1rBJizcV/lBF6QklobVrbuRQTYAu
EpLWaAlIqFcNtK5iUK3FrEu6pMmM2rTIDGMI1yBjHkNOZdrf37b8xU0uUapu7fCHt2LwVhbAilbV
gJTSGdfnrBam0Es0+oo+Vl30nWaiRf6BtUxcJjqJW5vCXa5GAdkGMPf2luUp/vSeziZm+aZtVLLs
kPzdCDaZuCGFvCAAAtEQiI26mUgm8vPzCgvzi8i0yXXNggLSPcUVhoQIeJ/DHeuai5xbvxgFOXVW
6iF6S3bogYz8wWa/hBXQ41YasQba769lhmfKXtgmIRXhBlcebypshCKISlN6IpFQ6K9mpAGFEDhK
ONj6fVJzdwu7L108tECX3osGWVB0zZ4GctMy0zQVrkHnvtygc/tNB1tmRFje3T9T+zrhJXNfv9Wq
7Tql7RTNaRpuaV2xF/ty0s2bbtWF+SxFUYc0JGu6VUpAs60PEQ9gBJZEIUJs2vBmRV9xD7YsN8vY
EoNKFi6psvjThSd9WWVKnIl9/4lkk4kNZQgKAiAQMYHYqJvJRDIvmcxPJsmcmU9/6VcikQiDRNgd
Ke2D65RmwgGlkuhaFJN2Ahm7Gf3FzQas5QFHt76uxtkl5M7oRUzLczq4bOds11zywBILvz9FfRm2
WFcJ+VNnQ6UZy8jmVjqHClhlICedFiTKYwzU3KmQDQYaIncRnthSYzxSSeaO+i1BJ9eaIq08yFMS
WXhYYe0c9+DOQKJHWDj9qouws0BNNWpmbz0fixJcrPWtCj3FQ/dSjIqR7BWor7gXTstKBMloaW0U
ZJx6BId0kug7m/yyd4/50dDwOO+QcWcH+UEABMaMQGzUzbxkgpRM8qen6Joh1U0NuNApzYQDayoJ
Dw4TWZz8D/slCnvc8phT3aHpkLiSWkuRMMUKWFLzAM8lFzrTKCU0YkyN57e7hMIM7HJ50jCjM0lj
JgOM3kbIBp378v1B8jO5orHefjPjZ+69en66SzCu8ajm2lLGvqv4HqS1YIBVF7KHDFZLzUx3j1HJ
oBCxa1pEbQ6R7t62arb57Uiksmn56YNPP5by7dE6SGX/GeUmEzt4EBgEQCBSArFRNwsK8ihQk/KE
yLqpXmTlHBUQPeHAGo5pNGnEKlmSEqwaksctZ9HEFk8PAM39yv2GwniTeo6mrG2kRNiC1ayZPSEl
5FqgyGdSY0w9JHQBZYzURQw9xVsYXQwTMq8VskFeNTB563yknVyeV9GxozOEW5wnk+XYFWzV5Zjw
EMcXAXFssH0bEf4HNQWQL+ymh+ms3wFx2IVTCp3RnyUlK+Qm40t4FAIBEBjXBEanq2URDSma9KLw
TXlJXzqZNoOpm/bt2PDk8iNCrCfSGUfniGgn1cpFeSfaHu1xyx2OyHk3znEUfkNhvOGBmB4SOqfi
CiNrWAnFGXv8oJ+UfCYvCQ27oD5AU/n2ECPFY67GgYVr0LEvN+bOVNOhY4znVbTXbrWfU5hu5fMV
UlXmkQSWroHo73vNafS9ocWxIGCLy1SEsLhuuAJKXp0efhyHlispi3rsP2E3mbEAgT5BAARyjkCc
1E3KDTK0TOlDD+xJL6m5W/taL2dCnrkoj+HkB22Kb/z2W9yj1FG/WkkJN/Zoj1uhJttDQnlrke4U
owcDt0pK4UNJKCyOZNf0SLJ2GINw4pt55T0beWaMdrmLYdEptXMfNQ0+XINOQ3YlLtFpZ5TyUpQe
4QMd6fFysJYYWRFBS3Gu7jYheWiAPR4u1IpAJRDwTUDPTFcPwNdCZfTDhmVb4sjhnQ8/1pGSJOS1
Q4baZHyLjoIgAALjm0Bs1M2Lp0+bPHlyUVGRtHBKo+awuALNEIXf7brOyCOe2X+Padjj5p97+vQU
45k7lyr+5XlrxU/FGCnhetYL37jdbwWSTC/sISG/xX8jRIgxmxLbRyWh8K4y/ktClqRg2w972kYh
DoRXaDQrqUJuNHS9TfS1qfQgt+kaGnyYBp3Ie/Dm6JiZ/rynUjHoes6gMAp2lSuUFjds4Nlaap5K
SiKLemhAqDWASiAQkgBP8elary5IsRptx7HxhKGO9jb7OV8e+4/XXhf1Nhhy+KgGAiCQqwQS27dv
J9m2bt1q6G0tLS2G1fCFV/quv6pMFb63t7e8vJzeOflulsZEP5jup6f6Vav9FEMZEAABEAABEAAB
EJiwBC4oONPc3HzHHXeMjIycPz98nv43TP8dPnnqlMrk3PvnxHX+7NmzZ86cmTp1ys+79ty24rbi
4mJZzFAIjVqkNFZcOau2tla+Q5bBuro6+XcM1M0JuyAwcBAAARAAARAAARCIlsCYqJuxcaZHyxqt
gQAIgAAIgAAIgAAIZIcA1M3scEYvIAACIAACIAACIDBBCUDdnKATj2GDAAiAAAiAAAiAQHYIQN3M
Dmf0AgIgAAIgAAIgAAITlADUzQk68Rg2CIAACIAACIAACGSHANTN7HBGLyAAAiAAAiAAAiAwQQlA
3ZygE49hgwAIgAAIgAAIgEB2CEDdzA5n9AICIAACIAACIAACE5RA+GPeJygwDBsEQAAEQAAEQAAE
YkuAfiIIvyoU29mD4CAAAiAAAiAAAiCQ8wTwq0I5P0UQEARAAARAAARAAARAICABxG4GBIbiIAAC
IAACIAACIAACQQhA3QxCC2VBAARAAARAAARAAAQCEoC6GRAYioMACIAACIAACIAACAQhAHUzCC2U
BQEQAAEQAAEQAAEQCEgA6mZAYCgOAiAAAiAAAiAAAiAQhADUzSC0UBYEQAAEQAAEQAAEQCAggRio
m82bN/l5+R/44Laa6VMS4lXTPmitN9h6p3YrMf3Bbsu9no2utTxuySZ4sxv3uoi498HE9JWtqiAe
Eiq3Eg/1WFsMI+FA+0qJgr/u3DbgKKNdQhsZhWRaGmmnirfgr8EAfXU/ZMys/EOd3zTtWOtaJ4vD
SWnZvq7SDjnrBdLMafbkESsw5bPGmCqeE2GHKtkTecx6Ss9KiJay1I2dR37YHTciS8teO2SYTWbM
eKFjEACB3CEQA3WTYH3xrru8X/6B0k46p75i19DICXrtrlg126rW3M8ekLeGutY3LDCfgrTJLtq3
+aBLLbdbmljdD82u7XATsWfj4gbLPQ8JxS2mi7GhaZGicYaRkJ4xM1e1V8sGDzRXd9TPtD/4mU1C
L9oeYvifJLVkkHF59TDY18u0kYr5HTlx73ytvKfM4tG7gO2Wq4JeRzaz2jkpz+yqlgPaXUmSWdZV
uIFntFagOc2oJD4btxLWPp5Wvd9nS+O8GFcH1eU6sqtx3WLt+8+Mqns2MLZuT8rXVCLSs2VVO1tf
yT8RaXbI8NvgOAeP4YEACHgTiIe6SWPY/9oh+Trwev+rb7z1et/h/sMDA0ePv33iZJA57t5a37a8
eeVcWWfeys1VbaselVbMgfaH1y1furBEa25+XXM1a+gUJklxq3lTlbw3b+2uRmst51uisNj9m1xF
7H5o0TrrTS8J9+xoY42rFTFY0x5F+KASDnbupGfM7lbZYMmKVtI49SEbQtkl9ODtASrILJllg5H3
6uNQXwerKNVnVynpKfNg6zfq29bvHlkzz6gxo2rLkc1V6xbbDHJGiZIVq9eztp1PO5uKw4GItFag
OY2058gam79m9wbW/tieFO9EZO3HtaG9e9axxi5lubK599Jy1fereQvXM2PTMMfIa1W11PFFnm6H
DLrJxBUk5AYBEIiYQGzUTTnuvLxkYUH+pEJ6FRSJF/0RBMn8NUMjj6yY4VCF617VS24yb5H6dWJo
LVdMhVpWXmreKi3V1TKPW1SRm81ITz1ygh6NTtfgtk1NVS2bG9V77hJ6jDOchCU1jwypihQrKa1g
bF+/8gh3kpDLYXG39euSedOQHlLd75wSCRC4Qce+3BEN9u9jjQu1rxlqMU+Z9z5a26E9htU6MyqX
VktTkMelLpggSzTjZd3mNOMdR9vBrLLlrK3/ULSNxr21gf4X7UOgL0iGLX9+Je022rdoo2T3ngam
f9P23iFDbYNxhwr5QQAEoiAQJ3WTdM2CfF3RnFRQNKmwsLAgLy8vNIfBbau5K/l2oTdo1i8z7Mka
yFhdOsvsR6hlxuV+a95acq06a7dcZSPLGdv8zZpSDyVJlZBxLYc1bNLiTYU/VFF6Qklo7ZobOVQT
oIuEpDVaAhLqVQOtqxhUazHrki5pMqM2LTLDGMI1yJjHkFOZ9ve3LX9xk0uUqls7/OGtGLyVBbCi
VTUgpXTG9TmrhSn0Eo2+oo9VF32nmWiRf2AtE5eJTuLWpnCXq1FAtgHMvb1leYo/vaeziVm+aRuV
LDskfzeCTSZuSCEvCIBANARio24mkon8/LzCwvwiMm1yXbOggHRPcYUhIQLe53DHuuYi59YvRkFO
nZV6iN6SHXogI3+w2S9hBfS4lUasgfb7a5nhmbIXtklIRbjBlcebChuhCKLSlJ5IJBT6qxlpQCEE
jhIOtn6f1Nzdwu5LFw8t0KX3okEWFF2zp4HctMw0TYVr0LkvN+jcftPBlhkRlnf3z9S+TnjJ3Ndv
tWq7Tmk7RXOahltaV+zFvpx086ZbdWE+S1HUIQ3Jmm6VEtBs60PEAxiBJVGIEJs2vFnRV9yDLcvN
MrbEoJKFS6os/nThSV9WmRJnYt9/ItlkYkMZgoIACERMIDbqZjKRzEsm85NJMmfm01/6lUgkwiAR
dkdK++A6pZlwQKkkuhbFpJ1Axm5Gf3GzAWt5wNGtr6txdgm5M3oR0/KcDi7bOds1lzywxMLvT1Ff
hi3WVUL+1NlQacYysrmVzqECVhnISacFifIYAzV3KmSDgYbIXYQnttQYj1SSuaN+S9DJtaZIKw/y
lEQWHlZYO8c9uDOQ6BEWTr/qIuwsUFONmtlbz8eiBBdrfatCT/HQvRSjYiR7Beor7oXTshJBMlpa
GwUZpx7BIZ0k+s4mv+zdY340NDzOO2Tc2UF+EACBMSMQG3UzL5kgJZP86Sm6Zkh1UwMudEoz4cCa
SsKDw0QWJ//DfonCHrc85lR3aDokrqTWUiRMsQKW1DzAc8mFzjRKCY0YU+P57S6hMAO7XJ40zOhM
0pjJAKO3EbJB5758f5D8TK5orLffzPiZe6+en+4SjGs8qrm2lLHvKr4HaS0YYNWF7CGD1VIz091j
VDIoROyaFlGbQ6S7t62abX47EqlsWn764NOPpXx7tA5S2X9GucnEDh4EBgEQiJRAbNTNgoI8CtSk
PCGybqoXWTlHBURPOLCGYxpNGrFKlqQEq4bkcctZNLHF0wNAc79yv6Ew3qSeoylrGykRtmA1a2ZP
SAm5FijymdQYUw8JXUAZI3URQ0/xFkYXw4TMa4VskFcNTN46H2knl+dVdOzoDOEW58lkOXYFW3U5
JjzE8UVAHBts30aE/0FNAeQLu+lhOut3QBx24ZRCZ/RnSckKucn4Eh6FQAAExjWB0elqWURDiia9
KHxTXtKXTqbNYOqmfTs2PLn8iBDriXTG0Tki2km1clHeibZHe9xyhyNy3o1zHIXfUBhveCCmh4TO
qbjCyBpWQnHGHj/oJyWfyUtCwy6oD9BUvj3ESPGYq3Fg4Rp07MuNuTPVdOgY43kV7bVb7ecUplv5
fIVUlXkkgaVrIPr7XnMafW9ocSwI2OIyFSEsrhuugJJXp4cfx6HlSsqiHvtP2E1mLECgTxAAgZwj
ECd1k3KDDC1T+tADe9JLau7WvtbLmZBnLspjOPlBm+Ibv/0W9yh11K9WUsKNPdrjVqjJ9pBQ3lqk
O8XowcCtklL4UBIKiyPZNT2SrB3GIJz4Zl55z0aeGaNd7mJYdErt3EdNgw/XoNOQXYlLdNoZpbwU
pUf4QEd6vBysJUZWRNBSnKu7TUgeGmCPhwu1IlAJBHwT0DPT1QPwtVAZ/bBh2ZY4cnjnw491pCQJ
ee2QoTYZ36KjIAiAwPgmEBt18+Lp0yZPnlxUVCQtnNKoOSyuQDNE4Xe7rjPyiGf232Ma9rj5554+
PcV45s6lin953lrxUzFGSrie9cI3bvdbgSTTC3tIyG/x3wgRYsymxPZRSSi8q4z/kpAlKdj2w562
UYgD4RUazUqqkBsNXW8TfW0qPchtuoYGH6ZBJ/IevDk6ZqY/76lUDLqeMyiMgl3lCqXFDRt4tpaa
p5KSyKIeGhBqDaASCIQkwFN8utarC1KsRttxbDxhqKO9zX7Ol8f+47XXRb0Nhhw+qoEACOQqgcT2
7dtJtq1btxp6W0tLi2E1fOGVvuuvKlOF7+3tLS8vp3dOvpulMdEPpvvpqX7Vaj/FUAYEQAAEQAAE
QAAEJiyBCwrONDc333HHHSMjI+fPD5+n/w3Tf4dPnjqlMjn3/jlxnT979uyZM2emTp3y8649t624
rbi4WBYzFEKjFimNFVfOqq2tle+QZbCurk7+HQN1c8IuCAwcBEAABEAABEAABKIlMCbqZmyc6dGy
RmsgAAIgAAIgAAIgAALZIQB1Mzuc0QsIgAAIgAAIgAAITFACUDcn6MRj2CAAAiAAAiAAAiCQHQJQ
N7PDGb2AAAiAAAiAAAiAwAQlAHVzgk48hg0CIAACIAACIAAC2SEAdTM7nNELCIAACIAACIAACExQ
AlA3J+jEY9ggAAIgAAIgAAIgkB0CUDezwxm9gAAIgAAIgAAIgMAEJRD+mPcJCgzDBgEQAAEQAAEQ
AIHYEqCfCMr+rwrBuhnb9QLBQQAEQAAEQAAEQCAOBMJbN7P2m+lxwAgZQQAEQAAEQAAEQCAGBPAj
ljGYJIgIAiAAAiAAAiAAAiAQiACc6YFwoTAIgAAIgAAIgAAIgEAwAlA3g/FCaRAAARAAARAAARAA
gUAEoG4GwoXCIAACIAACIAACIAACwQhA3QzGC6VBAARAAARAAARAAAQCEYC6GQgXCoMACIAACIAA
CIAACAQjAHUzGC+UBgEQAAEQAAEQAAEQCEQgBupm8+ZNfl7+hz24rWb6lIR41bQPWusNtt6p3UpM
f7Dbcq9no2stj1uyCd7sxr0uIu59MDF9ZasqiIeEyq3EQz3WFsNIONC+UqLgrzu3DTjKaJfQRkYh
mZZG2qniLfhrMERfjnORpp3uh4xVQX9YJ4vDUe86rqu0Q856gTRzmj15xApM+awxpornRNihSvZE
HrOe0rMSolmXq7nzyA+740ZkadlrhwyzyYwZL3QMAiCQOwRioG4SrC/edZf3yz9Q2knn1FfsGho5
Qa/dFatmW9Wa+9kD8tZQ1/qGBeZTkDbZRfs2H3Sp5XZLE6v7odm1HW4i9mxc3GC55yGhuMV0MTY0
LVI0zjAS0jNm5qr2atnggebqjvqZ9gc/s0noRdtDDP+TpJYMMi4fPTjNhafM4tG7gO2Wq4JeRzaz
2jkpz+yqlgPaXUmSWdaVD7GyXSTQnGZbOKf+rIS1j6dV788FMcdeBq4Oqst1ZFfjusXal7cZVfds
YGzdnpSvqSR0z5ZV7Wx95Xz6M80OGX4bHHs2kAAEQGAMCcRD3SRA+187JF8HXu9/9Y23Xu873H94
YODo8bdPnAyCr3trfdvy5pVzZZ15KzdXta16VFoxB9ofXrd86cISrbn5dc3VrKFTmCTFreZNVfLe
vLW7Gq21nG+JwmL3b3IVsfuhReusN70k3LOjjTWuVsRgTXsU4YNKONi5k54xu1tlgyUrWknj1Ids
CGWX0IO3B6ggs2SWDUY+TR/Oc+Ep82DrN+rb1u8eWTPPaHtG1ZYjm6vWLbYZ5IwSJStWr2dtO592
NhWHAxFprUBzGmnPkTU2f83uDaz9sT0p3onI2o9rQ3v3rGONXcpyZXPvpeWq71fzFq5nxqZhjpHX
qmqp44s83Q4ZdJOJK0jIDQIgEDGB2Kibctx5ecnCgvxJhfQqKBIv+iMIkvlrhkYeWTHDoQrXvaqX
3GTeIvXrxNBarpgKtay81LxVWqqrZR63pK65iPTUIyfo0eh0DW7b1FTVsrlRvecuocc4w0lYUvPI
kKpIsZLSCsb29SuPcCcJuRwWd1u/Lpk3Dekh1f3OKZEAgRt07MsDkdtceMq899HaDu0xrDY9o3Jp
tTQFeVzqggmyRDNe1m1OM95xtB3MKlvO2voPRdto3Fsb6H/RPgT6gjRy4l65XOdX0m6jfYs2Snbv
aWD6N23vHTLUNhh3qJAfBEAgCgJxUjdJ1yzI1xXNSQVFkwoLCwvy8vJCcxjctpq7km8XG/Ghvg5W
UVpihj1ZAxmrS2eZ/Qi1zLjcb81bS65VZ+2Wq2xkOWObv1lT6j4Ai4SMazmsYZMWbyr8oYrSE0pC
a9fcyMEh6O+6SEhaoyUgoV410LqKQbUWsy7pkiYzatMiM4whXIOMeQzZxtRrLtza4Q9vxeCtLIAV
raoBKaUzrs9ZLUyhl2j0FX2suug7zUSL/ANrmbhMdBK3NoW7XI0Csg1g7u0ty1P86T2dTczyTduo
ZN1/6O0INpm4IYW8IAAC0RCIjbqZSCby8/MKC/OLyLTJdc2CAtI9xRWGhAh4n8Md65qLfLB/H0U1
LZ7SWamH6C3ZoQcy8geb/RJWQI9bacQaaL+/lhmeKXthm4RUhBtcebypsBGKICpN6YlEQqG/mpEG
FELgKOFg6/dJzd0t7L508dACXXovGmRB0TV7GshNy0zTVLgGnfsKvha8ZO7rt1q1XVtvp2hO03BL
64q92JeTbt50qy44v2hqkIZkTbdKCWi29SLiAYzAkmiEiEkr3qzoa9XBluVmGVtiUMnCJVUWf7rw
pC+rNL5kSgz2/SeSTSYmjCEmCIBA5ARio24mE8m8ZDI/mSRzZj79pV+JRCIMFGHrorQPrlOaCQeU
NKNrUUzaCWTsZvQXNxuwlgcc3fq6GmeXkDujFzEtz+ngsp2zXXPJA0ssfM0U9WXYYl0l5E+dDZVm
LCObW+kcKmCVgZx0WpAof5KpuVMhGww8xNFXsKZIKw/ylEQWHlZYO8c9uHP0koRrIf2qC9fu6Gs1
amZvPR+LElysjVoVeoqH7qUYFc1BPPruY9VCWlYiSEZLa6Mg49SjEqSTRN/Z5Je9e2pStE36Gum0
Q8YKFIQFARDIJQKxUTfzkglSMsmfnqJrhlQ3tTkQOqWZcKD4kakADw4TWZz8D/slCnvc8phm3aGZ
usU7VFEkTLECltQ8wHPJt3CFeJQSGnGNxvPbXUJhBna5PGmY0ZmkMZMBRm8jZIPOfQX/bKWbwd5+
M+Nn7r16frpLMK7WvbT4Zuy7SvBBihoBVl3IHjJYLTUz3T1GJYNCxK5pEbU5RLp726rZ5rcjkcqm
5acPPv1YyrdH6yCV/WeUm0zs4EFgEACBSAnERt0sKMijQE3KEyLrpnqRlXNUQPSEA2s4ptGkEatk
SUqwakget5xFE1s8PQA09yv3GwrjTeo5mrK2kRJhC1azZvaElJBrgSKfSY0x9ZDQBZQxUhcx9BRv
YTIxTMi8VsgGedXA5F2Wils7PK+iY0dnCLc4TybLsSvYqssx4SGOLwLi2GD7NiL8D2oKIF/YTQ/T
Wb8D4rCLhVpgjGMflpSskJuML+FRCARAYFwTGJ2ulkU0pGjSi8I35SV96WTaDKZu2rdjw5PLjwix
nkinJQ+RQsSjnVQrV3+/vkd73HKHI3LejXMchd9QGG94IKaHhM6puMLIGlZCccYeP+gnJZ/JS0LD
6KsP0FS+PcRI8ZircWDhGnTsK8SC9JxBnlfRXrvVfk5huo74Cqkq80gCS9dA9Pe95jT63tDiWBCw
xWUqQlhcN1wBJa9ODz+OQ8uVlEU99p+wm8xYgECfIAACOUcgTuom5QYZWqb0oQf2pJfU3K19rZcz
Ic9clMdw8oM2xTd++y3uUeqoX62khBt7tMetUJPtIaG8tUh3itGDgVslpfChJBQWR7JreiRZO4xB
OPHNvPKejTwzRrvcxbDolNq5j5oGH65BpyGHIu45g5pslhhZEUFLca7uNiF5aIA9Hi6UfKgEAr4J
6Jnp6gH4WqiMftiwbEscObzz4cc6UpKEvHbIUJuMb9FREARAYHwTiI26efH0aZMnTy4qKpIWTmnU
HBZXoBmi8Ltd1xl5xDP77zENe9z8c0+fnmI8c+dSxb88b634qRgjJVzPeuEbt/utQJLphT0k5Lf4
b4QIMWZTYvuoJBTeVcZ/SciSFGz7YU/bKMSB8AqNZiVVyI2GrreJvjaVHuQ2XUODD9OgE/lQvNPM
oDAKdpUrlBY3bODZWmqeSkoii3poQEiZUA0EQhHgKT5d69UFKVaj7Tg2njDU0d5mP+fLY//x2uui
3gZDDR6VQAAEcpdAYvv27STd1q1bDb2tpaXFsBq+8Erf9VeVqeL39vaWl5fTOyffzdKo6AfT/fRU
v2q1n2IoAwIgAAIgAAIgAAITlsAFBWeam5vvuOOOkZGR8+eHz9P/hum/wydPnVKZnHv/nLjOnz17
9syZM1OnTvl5157bVtxWXFwsixkKoVGLlMaKK2fV1tbKd8gyWFdXJ/+Ogbo5YRcEBg4CIAACIAAC
IAAC0RIYE3UzNs70aFmjNRAAARAAARAAARAAgewQgLqZHc7oBQRAAARAAARAAAQmKAGomxN04jFs
EAABEAABEAABEMgOAaib2eGMXkAABEAABEAABEBgghKAujlBJx7DBgEQAAEQAAEQAIHsEIC6mR3O
6AUEQAAEQAAEQAAEJigBqJsTdOIxbBAAARAAARAAARDIDgGom9nhjF5AAARAAARAAARAYIISCH/M
+wQFhmGDAAiAAAiAAAiAQGwJ0E8E4VeFYjt7EBwEQAAEQAAEQAAEcp4AflUo56cIAoIACIAACIAA
CIAACAQkgNjNgMBQHARAAARAAARAAARAIAgBqJtBaKEsCIAACIAACIAACIBAQAJQNwMCQ3EQAAEQ
AAEQAAEQAIEgBKBuBqGFsiAAAiAAAiAAAiAAAgEJQN0MCAzFQQAEQAAEQAAEQAAEghCAuhmEFsqC
AAiAAAiAAAiAAAgEJBADdbN58yY/L/8DH9xWM31KQrxq2get9QZb79RuJaY/2G2517PRtZbHLdkE
b3bjXhcR9z6YmL6yVRXEQ0LlVuKhHmuLYSQcaF8pUfDXndsGHGW0S2gjo5BMSyPtVPEW/DUYoi/H
uUjTTvdDxqqgP6yTxeGodx3XVdohZ71AmjnNnjxiBaZ81hhTxXMi7FAleyKPWU/pWQnRrMvV3Hnk
h91xI7K07LVDhtlkxowXOgYBEMgdAjFQNwnWF++6y/vlHyjtpHPqK3YNjZyg1+6KVbOtas397AF5
a6hrfcMC8ylIm+yifZsPutRyu6WJ1f3Q7NoONxF7Ni5usNzzkFDcYroYG5oWKRpnGAnpGTNzVXu1
bPBAc3VH/Uz7g5/ZJPSi7SGG/0lSSwYZl48enObCU2bx6F3AdstVQa8jm1ntnJRndlXLAe2uJMks
68qHWNkuEmhOsy2cU39WwtrH06r354KYYy8DVwfV5Tqyq3HdYu3L24yqezYwtm5PytdUErpny6p2
tr5yPv2ZZocMvw2OPRtIAAIgMIYE4qFuEqD9rx2SrwOv97/6xluv9x3uPzwwcPT42ydOBsHXvbW+
bXnzyrmyzryVm6vaVj0qrZgD7Q+vW750YYnW3Py65mrW0ClMkuJW86YqeW/e2l2N1lrOt0Rhsfs3
uYrY/dCiddabXhLu2dHGGlcrYrCmPYrwQSUc7NxJz5jdrbLBkhWtpHHqQzaEskvowdsDVJBZMssG
I5+mD+e58JR5sPUb9W3rd4+smWe0PaNqy5HNVesW2wxyRomSFavXs7adTzubisOBiLRWoDmNtOfI
Gpu/ZvcG1v7YnhTvRGTtx7WhvXvWscYuZbmyuffSctX3q3kL1zNj0zDHyGtVtdTxRZ5uhwy6ycQV
[Attachment: image002.png (base64 image data omitted)]
PXv25MmTr9jHqFGjzj33XMkwx44de955502aNEnOHRoaqvnB9u0T5P9qjj96590PHq+54Lzxddbg
mTNv1dXVT548eerUqVdfffW73/3uKVOm6IvJObgjgAACCCCAAAIIIJBHoKamRj976NChhx56SEYz
T58+/cYbb8jjkozW1ta+7W1vmzhxonxZ0/q9f1twbo117Jdbv/d47ZT62tqaurq6N9988/LLL//4
xz9+6aWXypeSgPb398v5zz///HPPPff4448fOXJEapfTC8hNt23bRuchgAACCCCAAAJRCSxfvjyq
qqinMAHJCWUctLau7pxzxp85c1q+nDhx0rTpMxpnz547953v+r15koB2dXXF4/GDBw9KDiqFpczs
2bNVPnp92/dWvfvcp3/S0fly79jRkozWHj9+fNmyZYsXL5Zh0fr6+rfeeuuOO+6QwVU559SpU729
vfKIDLEOJxnlRVNYT3MWAggggAACCCBQhgJOMiqDmomRylrJTOvqx4wZNap+1NsuvOjdVy5417zL
JM/8zne+8/DDD0+bNk1m8GXiXvLRurrzrrx4pvWr/c+MGjvu7MDAW6dPx2Kxa665Zvr06ZJxPvDA
A/fcc88TTzwhY6XHjh2TZPTMmTNqdj8x9FoAx9KlS++6664CTuQUBBBAAAEEEEAAgfIUUMnh0NCg
pIn2IWlk/9mzahDzzFsnTpyQ2fVDrx+ZOm16c9Pvy4NPPvnkhAkTZHxUPq8779zLJ086dOrE2RMn
Tw0NDn7oQx/65Cc/KU+//vrrO3bsuP/++7u7u6U6GSKVlks+K4ebiRaWkpKMludriKgQQAABBBBA
AIHCBPSAqM4S9aGTRjU6Wl8vg6CnTp18442jB3sONb794gULrjxy+PBLL70kBVR6ec6oVw53vyFl
ZV/S2xsbP/OZz4wePVom4nft2iUb7WVtqFQhNfb19UlKKofksO460QIWjBbWQs5CAAEEEEAAAQQQ
KFsBPW2eGBV1/19tfJdMVMIeM2bM2b63Xvz10/fe96/HT5xctXKl7KzX+Wvt1Aly+6aa3/72txdd
cMGf/dmf6RFQ2fEkyaicLF9KAho2+yRJLdvXCoEhgAACCCCAAAIjJaCyUsk2ZShTDWgOWTKi+cwz
T/3XT3bV1tXLutBx48bJQtDaKaNrZX/T6dNnpk4998orr5QTZOP9o48+evToURlZ9Y6DZsbtjsQW
Nl8/UhBcBwEEEEAAAQQQQKD0AjIPXzvUv//J/b965tmmpibZwCRDn7UTpkyUpaPnnz9r7ty5Mg4q
2afscnrxxRclK00b4PTuW5L0Vk6WzUwyoS+HfCJf6i32chR2y6fSCxEBAggggAACCCCAQKQC7uCo
rlWyzTOn3nz6qSffPHnqsssuk1vfq4FT2eL0zne+s7m5We176u9/+umnZYWoDIu6yaWc6S4FkAcl
15T74V944YW/8zu/M98+5BP5Uh6Up/S0PvlopP1IZQgggAACCCCAgKkCOh+VN/xU7/k5aPX39fb0
9Oz75RPyLk1y+yY1/CkjozNnzmxsbJQmyv2b9E575zS71Tq5VKcPDp5zzjnnn3/+JZdcMmfOHMlB
5a74csgn8qU8KE9JAV2SfNTUlwxxI4AAAggggAACkQqo+XbJR9X2+aGBocHjx0888eSvZFreGRmV
p2XZqF73+cwzz8jNRGV8VOegOgx3e5QUu+iiiy6++OJZs2aNG3eOFJP3G5VDPpEv5UF5SgpIMb2N
irWkkfYjlSGAAAIIIIAAAsYKSGKp80OrRrLHA6++On78eJmKr5G7isr7Mq1atUre/FNm2L/5zW/u
27dP8lG9e8ltrn5Xe5mLl0Mm+3We6s013UckMX3VPmQhqVSSCSZvB5rzHZg+t3nbB96mTjn11J2f
vvU//bQ/fMt3brhsvGW99tPla2/PVjhPgcSlsp2bOE1X7JT0DUmVs3JF4teU9OfzX9R1Sj0tPcSg
katawpQN2xjKI4AAAggggAACtoAa4lRvHFpfWz/q//ve1ltvvVXda9R75yZZMCqz9npc00WTzyUB
ldtByXs3yYOyV6lP/ne2/2zfUH9vjXzIJ/Kl/aC6lZQUk8JuzpqGL1lvru743CV2JirH+NlXfbi4
nXb786/ZF3jbJZ9Lv9CHr5otGa4crz2fNcXNEpikr04aXdyo89c+/rIbtn3nliLDlbKBXBsBBBBA
AAEEKkBA3qlpUOWR9iy6WjPqvUOTrBnVW5S8DZUhUrlVqUzqS34pqWpvX29/X3/f4LFTo351YvxP
5UM+kS/lQXlKDikmheUU79iqW2HWB+1nk7loSbPRRC566qkH7Fz09rUykrt8eYCR2nJ4dUhCujmR
XpsVeTnoEQMCCCCAAAIIFFVAr+OUW47KZibnfZt03ukdHJUv0/YeybOSWcohU/ByDJwd6LOOnxn7
7JkJj58Zt199yCdjn5UH5SldRpf3Dq/6NuzDtyywx0Vfe80eshx/2R9nDFlalhqBdI7sQ4C+BZJx
5BobbZxsj4ueevlRvU5AJrDV4VzP/epziVDu/M73ttmrBeR42weckqknqacSkXni9gTruYKvlFtA
puXtPFkfdz51Sj/jDvZmBuE8kjB009bMayZKMtIavD8oiQACCCCAAAI+AmoM1B4PrVHT8yrntP/f
s/pTj5Km7T2SL+WupLIAtE/GROU4O3R29MGzE14YGn2s1qqXD/lEvpQH5Sl5XopJYTklTDKaGI98
7fm1D+isKmMCXfKjRNInT8sQoOcruxm+BVJ9smejzvism4tmNx1/2Qec/PPUb99UaxNCH+7CVPdM
adFwUr//vPXTP8259EAukmUpgeTO2fPRz23Wa3df+6khA8Kh+TkBAQQQQAABBEogoJJD531Dnas7
yagbi7sLPi2P1BmqLAlVh7xNff1Rq/5UzcBotS3KGpJP5Et5UJ7SRfJspY/H41mansxFb7f+89GX
s2SjiZFTtavIPpzMy63Mt0DGZbNlo8FyUakqMSz56dWrP+2OStqxBcrfPvfHdjKbaIwzrDl+cuNw
XheJBk05L3PhqCucyve2BRmLTCVrtVPRnPvChhMi5yKAAAIIIIBAtQukLAdNT0Zz4chCTz1FrwZG
+/sH5cb2MrmfKK0/lwflKbdYrrWhWe/35M1FrWQ26kmU0ldyqrWcKemob4EsbcvMRgPnoq/t8d/t
n++l5qznVHcBUDPiiUHebGlktK9YtZJADYc6AWSkzo23LFVpsuTa2e9QEG001IYAAggggAAC1S3g
vOdn/vl0SR/1m386qWZ/38Cp8VbfuJq6szoNlU/kS3lQpuh1Gf0GoYHvM+oMEzprLpOpWZZN9ccO
J+/41H3cWSXp7UTfAt7C6dmok4v6Z5qnjncP95Xjrt/UM+LFPhLjzeo69tLWrMtU3eUHxb+fQbEb
TP0IIIAAAgggUO4CkoIGGhnVyai8B739dqEyPNp79sSks4cbBs9MqK0dlA/5RL6UB+Up9fxAvxQO
kYx6t9GnomXfxhQhbGo2mshFA9/SqeBIElPhUoE9U+/uPiq4RnViAtKbkLsVyprSjMtkXaYqt5mV
k4pOP6yWcjICCCCAAAIIVIZAreSO8q6g27dv1zcTzdoqSUbl3ertwU47Fx0423e6/syhmW+98vYz
r10oH+qTQzPlQXlKzeOfVSOjckrAkdHcuagaxEu7D6h3HtvZ954atG+B1OLebHTkclHLCV0loimz
4VnTyICvtcSuIyv3sK1KSJ0jscYhbQhU3Tj/03vsjVBv+0Ce3fYBY6IYAggggAACCCCQXUCSz6ee
eiroyKjcfFTyS7mHqM5GJd/sO1V/+jcz3vz1O+VDPpEvVRJqj51KMSmceb/SHF2RyEUTe3nSsiUn
G03MMo+/bKmz48bNvXS9vgVyXD6ZjW7W99wPfqv7PK8tZwWBO7z4Yb0SM/1IpNruOoVQL1d1k3vP
kZjuz77IIP1eTbc7Ny2wUvJffROBxHLcLLubQgVIYQQQQAABBBBAIL9A0GRUUlc9OGrnonIMSto5
aJ0drOlVH9ZZ+dJ+UCWjelg04HvTJ/fAp8+Nu1mi3sb0n7fq8Tp1Tyc7A0tfaulbIFc2mriV1NsK
z0X/8/Axu3b3PqOJBxLLM9PvQuU2Tq/edNsy/A1Mr/00+8ajRPLpZrA6pMSt/dNwnNLM1fMTBAEE
EEAAAQSKKhAoGdURSKJ52j5k4NPOR9VdnPR2JflEPyJP6TLyebC48731ZiJhS0wky/5v79LK136a
vtDSt0D2mLxbe3LkZr6NcccZVbas7s8ksXg3+2cEe/taT/TqNlG69LD2Ddl3m8q5Bz7LmlEZjM51
H6pEas/gqG/fUwABBBBAAAEECheoaWlp0WfL7T9liadMkWdWpsc49S1I5W72csjq0traWnlQPtd5
qjylb/+ks9KsN8/XNcs4YNarFN4IzkQAAQQQQAABBBAwTUCSz1gsFmhk1H1PJvlEZ5x6Vaj+1/1E
ryiVAnkyUdOUiBcBBBBAAAEEEECgiAKBklG5vjcflS8l49TT9DoZ1dP0+i73ZKJF7C6qRgABBBBA
AAEEKksgaDLq5qO6+TrjVO9tbx/ul/rZgFuXKkuS1iCAAAIIIIAAAgiEFgiRjIaumxMQQAABBBBA
AAEEEMgrEC4ZDXgT+4DF6BoEEEAAAQQQQACBKhcIl4xWORbNRwABBBBAAAEEEIhWgGQ0Wk9qQwAB
BBBAAAEEEAghQDIaAouiCCCAAAIIIIAAAtEKpN/0Ptrac9XGTe9HxpmrIIAAAgggUA0C8n461dDM
ymuj3H9JbnpfmmS08jRpEQIIIIAAAggggEAoAZ2MMk0fCo3CCCCAAAIIIIAAAlEKkIxGqUldCCCA
AAIIIIAAAqEEsk/T7322O1QtFEYAAQQQQAABBBBAIJTA/Dmzc64ZlWT0irmNoaqjMAIIIIAAAggg
gED1CPT29h4+fLi+vn706NHy5vBuw9Pe/CjtXeL1e8ifOXPmwNFTOhllmr56XjO0FAEEEEAAAQQQ
KDsBktGy6xICQgABBBBAAAEEqkeAZLR6+pqWIoAAAggggAACZSdAMlp2XUJACCCAAAIIIIBA9QiQ
jFZPX9NSBBBAAAEEEECg7ARIRsuuSwgIAQQQQAABBBCoHgGS0erpa1qKAAIIIIAAAgiMhEBdXd2o
UaPk3yAXIxkNokQZBBBAAAEEEEAAgUACkoPKbUebmprk3yD5KMloIFYKIYAAAggggAACCPgK6Ez0
0ksvHTt27OLFi4PkoySjvqoUQAABBBBAAAEEEPAXkExUDslEZ82cOfuii65asOCv1/61fjDPySSj
/rKUQAABBBBAAAEEEMgv4Gai77j44saLL25oaJg0adI7L3nn1jvvzD8+WtPS0qKrjsfj7nuJFvbe
9P+0ZUuQflqzenWQYpRBAAEEEEAAAQQQKFuBtPeml3eoX7ly5bhx46ZNmzZzxowZM2dMmzp94qSJ
Ev9bb731gQ984OzZs/J55nvTR5yM3vjZz+Yn+9a3v00yWravKgJDAAEEEEAAAQQCCqQlo/mn4wcH
BwcGBrImo9FP0z/965f1xzMvvPLciwde6H7tldd6el4/cvS3x3za1rVRJcvLt/UENAhdTF0gSPU6
kPRjY1fygv41+ZcIHT4nIIAAAggggAAC5SoguWaffcgIqPfQj+hMNOsRfTKqLyNbqUaPqh8zWj5G
jbU/5JP8el271rds2NDScd+u1GxUpXWJRND7eXH7oiV+cMh7dG5Y3xwoky1uWNSOAAIIIIAAAghU
lEBRklHJREfVJ9LQMaPGjpFVBH43Pu3ZtmW9NX/hymtbOmJbPWOQVk/3Plfc+/kId0PTuoNxCW21
HrdtWjc0tG35rBGOgcshgAACCCCAAAKVJhB9MlpTW1NfXzd6dP1YGRZVmajcgb9eH3nwenbd19ES
X9k0a+G1Ldb6LYmp+p5tyxtiHZYlo5I1X/xy8nM9Upo2nZ4yA5/ynHeG3drlmYRPecKnb2ctX73B
csZtvZPwKVfKvgpAmlHkFQiV9rqkPQgggAACCCBQJQLRJ6O1NbVyt9N6+5779eq+p87hbtXPJtu1
NdbRcu1CGWrUKV9icHTW8m0yIGlZGzqHhr7y98nP1zWp/K55vXpcH95xS5WlNu9z59ntGfZE2tkR
616YOEc9EWQNaSLixsYWq6O729sACcNzpYNxK9aQnuDqhFpFylhqlXxX0UwEEEAAAQQQCCwQfTJa
V1sjKajM1KdlovmSUVkuaulcVI6mlZJ+JgdHczRF0lTJKdc1JZ62h1R1pmjP+G9Y7c6iqzl1t6Qa
fs2TXPrD7ev2Lmnt7u5IjJbKqXZQyZjsWJxMNOVR/6tQAgEEEEAAAQQQqAqB6JPRUaPqZIGo7F7S
O/zdQwZIc4jayaOMhjY4+9ftifm0laM5O8OZAa+psc+yD5UgbljoppwR9+P8Ru9S0aZ1nWogV4We
OcYqj6uoNnSSiUbcCVSHAAIIIIAAApUiEH0yKmmofMiyUX3oSXrJ1XImo2qK3p6H9xxqbt5ncNTJ
Qhti8/Wp9nR+cQ+V5bY0NqZeRI272td2sumUnLQlHt8gC17DrAUobhOoHQEEEEAAAQQQKCuBoiSj
smPJzUH17HyeOXp7it4zd27z6En39Js8eeXsVab2stCMYUe1tHP9Lu+O/EjI7QFcdzFBSpV6zcDQ
kIySeod0W+Jbli9P2YUfSSBUggACCCCAAAIIlINAT0/PQw89dNddd/0w8HH//fc///zz3uCjT0an
njtl/PjxY8eO1aOjekBUbrsvRxa1XBle6j6mHNyezURdG5PT9Pa5nnFVewx1mIOTeu2nyi5T7ueU
VrW93DV97HTW8i2em0KVwyuHGBBAAAEEEEAAgQgEXn755aampk8ljk/ax8dSj+uvX3rttdd99KPX
fuQjSxYtunrOnN956aWXipiMyrt95v9Ia7c9RZ99tFHvY2pevmuhSi3l1k6STuo00/68UY04qk/1
sWuhPVfupKAydd4537MGVWbyw+5kTy5htetviFkyDJtRib3bX3bQJ8JouO/ag5lXctLRBndLfwS9
TxUIIIAAAggggECJBQ4fPjx16lQJQuaH+/sHzvb3y9stnT59xvtx6tSpN0+eOn78zcNH3nj1wG9k
pLLnUMr7G0X53vQl9uDyCCCAAAIIIIAAAiMlIO9N397evmLFCrmg/dbzeiJ8UPJObwj9kqXKk/0D
p06f7j3TO3nyxIce/vmypcsOHD01f87sWCwW/TT9SAlwHQQQQAABBBBAAIHSC8iw6OCgfKhDUtL+
AUk/PR/9/Wf7zsqI6dCgs1c9LWKS0dJ3IREggAACCCCAAAJVK0AyWrVdT8MRQAABBBBAAIHSC5CM
lr4PiAABBBBAAAEEEKhaAZLRqu16Go4AAggggAACCJRegGS09H1ABAgggAACCCCAQNUKkIxWbdfT
cAQQQAABBBBAoPQCJKOl7wMiQAABBBBAAAEEqlaAZLRqu56GI4AAAggggAACpRcgGS19HxABAggg
gAACCCBQtQIko1Xb9TQcAQQQQAABBBAovUDO96Z/9++8vfTREQECCCCAAAIIIIBAWQqcOXNGvze9
vMunvAvogP3m9PLPsePHvfH2n5V3p1dvUN/X1ydvZy/vTf9w527ve9PnTEavmNtYlg0nKAQQQAAB
BBBAAIHSC0hmGUkyyjR96fuSCBBAAAEEEEAAgaoVIBmt2q6n4QgggAACCCCAQOkFSEZL3wdEgAAC
CCCAAAIIVK0AyWjVdj0NRwABBBBAAAEESi9AMlr6PiACBBBAAAEEEECgagVIRqu262k4AggggAAC
CCBQegGS0dL3AREggAACCCCAAAJVK0AyWrVdT8MRQAABBBBAAIHSC5CMlr4PiAABBBBAAAEEEKha
AZLRqu16Go4AAggggAACCJRegGS09H1ABAgggAACCCCAQNUKkIxWbdfTcAQQQAABBBBAoPQCJKOl
7wMiQAABBBBAAAEEqlaAZLRqu56GI4AAAggggAACpRcgGS19HxABAggggAACCCBQtQIko1Xb9TQc
AQQQQAABBBAovUAhyejBgwd37tz5rW9/+5+2bJF/5XN5pPRNiTSCamhjpGBUhgACCCCAAAIIFCIQ
Ohn9zx07frJr13nnnXfdtdfe+NnPyr/yuTwijxdy/bI8pxraWJbwBIUAAggggAACVScQLhndcf/9
f/j+93/4wx8+3TfYteeJ3Y/s7Xrsid5+65oPXyOPy7NZ/Hq2La9Zvq2nnGS7NtbkDimtjd+7e4dv
G1V9WY/yaXfeJrtdlL0hG7uSnZe3noj6WIVT473osOqNtrZhhcLJCCCAAAIIIJApECIZ7enpeePI
kd6+vocf2Xumr6+hoWH6tGkXXXjh4NDgvv1Pnj179ujRo8OZr1d5TmQpSIF97bZxd9fjP9r50E1f
/+6d2//r8xvv+OH9ux//5f58bWyJHxzKOLYtn5U3kHJostW1seG+aw+6oaY3pHPD+uYi/TUxIs2f
tXzbwfi+5lK/sgp8QXIaAggggAACFS8QIhnd98tfXnbZZU8/9+I5486ZPHGy0IwZPWrM6PoZ06dN
mTT5lVcPXHbppb984olCyXq69xV6anTnuW2M73jozGDtvT/65y9/5UuNv3v59p3/9d37fjLsNqYF
Wg5N7tm2Zf2G1XmS5qZ1B+MtHbHVenC7ad3QkF+KHbQ7Rqz5s5avloyadDRox1AOAQQQQACBkRQI
kYweOHDg/PPPP3r02KSJk2RSevSo+nPGjjpnzOixo0fPOG9aX1+vjJVKmQBDgfbMqXPoFEEeaYh1
WJYMwiVGR72FkiOmeka5S1cgs+B6cC2zQh1F2rSz/6y5buPeX/36pd+8/okb/59Z06YcGTPq2B8s
6bHGP/7Uc0+98HKANqYD6CDcazuxbvyPsmhy19aYFV/ZlLfTVC5nddy3S2Wj7jR9ekc4vZhcrZCa
+6X0hHoqS48HeN2nrxLwzMHnfSU0rYy3rN9SXotFAjSXIggggAACCFSBQIhk9PTp0+PGjRt7zphR
Kg0dLR/jxo6Rf8eMGTV69KhJkybJs1LGF21982pri57RljG39c0qTbPnUlssa0Pn0NA6SY0ks2iI
zZcvnGL7vDPFHbEtugJniC5bhXa2U9O8XlXoXssd38sZo27j0y//ZvyomsNvnn7p0Jn2X/e81W+N
+UDLGz2v7X/+QMA2ei8gg4nJscWebasl6Zao1v1JGTRZDYta8xvzryWQtjQ2tlgd3d3pbCkdkafL
5Knmfe4yBnvef2N3WvN9XzZBCmR/JagzZzXOT+TTQSqiDAIIIIAAAgiMlECIZHTixIknT54cXVc7
eeK4KfIxafz48ZKbjTvnnHPGjh0rn5w6dWrC+PG+kbfEtyRmhWctvDZblmPnSBs6VVZqH7OWb5GZ
Yj00p46Waxd686fsFUp+K1moW4eV41qp4eo2njth3IvdL//533w59q9P7X+p78iJ/hOP/XT8hIkz
pp2bs40dsYaMPUzuaKhuQKxh+XJJRWVRZjKqxOVL0+Tu7g6rpbHRt8vsAvu6M3ahJTsiT/z6qeRK
ADXT7+mXYBcPVirPS6tpoYzuZubTweqlFAIIIIAAAggUTSBEMvq2889/7Te/mXru5FMn36yvr5Oj
1j7kk7Nn+8eOGXPgtdcuuOCCCEJVOZI9Y+8eag4/mUoEGMtLRuFO4dvrAHwO3cZr/nBBzdiJ/S89
cXjHvSdfePHkg/eOOvBE3TmTPvJHv5+zjdk2MHkWV9rpqNXRIamom4p7YylJk+1Fm4ExsxRMPpQn
fvXUhoX5VwL49UtEz2fJpyOqmWoQQAABBBBAoECBEMnoFVdcsXv37tkXXdjX2yvT2ToTleNsf//A
4IAMjsqzUqbAQNJPy5LcZY4n5r2Wk4W60/32OgCfQ7fx8kvntP7Fn82Yft7EFx7qv3fTuS89NHP6
ebf9/f+YOX1a4W200zUr3+hcaZrsR6KeDzaCOvz4g8RCGQQQQAABBBCoKIEQyajc3P53L7tMxkEv
vvjtQ0PW0d/+9q233jp1+vRA/4AMi8ogpjwrZSLgsZcoJmflC6tRtuaoGfFwU8JuGz+66AN3b/nC
J/7suvc3N6361PL/+92N77vy8mG0sWtjs1p50LlBRnyzbesuSZPVQspss+9p4PY8e9rSiPQ+yRO/
emr9Ls+9SgvrzyjOCjwKHMXFqAMBBBBAAAEEggiESEalug996EPPPPvs977//Z5DPTJRL6Ohp06e
fObZZ9q3bpXH5dkgl/Qvo7dvJ+4mJOX1IGf4e/N4FgnK3TQDTNN723jm9JurP/XRe/+57b//2eLX
e14bRhslfDsVXdfUtM5ORzO39ZemyQEWUupd7zkWFyS7Mk/89lOerex2Z/rf2CDryyRtJ5X6g8P/
5WSXUEsSAi+PDVgnxRBAAAEEEEBg+ALhklG53pXvfvcnP/GJs319P/v5z+WN6eVf+VwekceHF43O
WWShqMpT7B3olrsnqEHuP3Qw5KYX+/6YyYWnuxaqPe3epCh3uIW0MdsGJrstiWzO2bXk3LazQd9D
oNRNVtlo2qhlWkM0fYBbi+bpMnmqc76nM+U2CarClOan90bKkmH33ljOHewTa4l3LQyy9sLORXfd
1+EzuDu8ly9nI4AAAggggEBhAjUtLc5Kyng8Lr/jdS17n+2+Ym5jYTVyllECKlfuXh0y0TeqhSpY
dXMpS+6nVRbbqIzTI2AEEEAAAQSyCfT29ra3t69YsULWRA7I/iH5b1D+HTx2/Li3eL/sLlLHQJ/c
lL63d/LkiQ937l62dNmBo6fmz5kdi8VCj4zSHZUlYL87UYXfDt5+l6nkncIqqwNpDQIIIIAAAoYL
kIwa3oHDD1/WscoceoGLOId/+WLXYC+TmE8qWmxn6kcAAQQQQKBAAZLRAuEq6bQo33C+7FzsNz9g
fr7s+oWAEEAAAQQQcARIRnkpIIAAAggggAACCJRMgGS0ZPRcGAEEEEAAAQQQQIBklNcAAggggAAC
CCCAQMkESEZLRs+FEUAAAQQQQAABBEhGeQ0ggAACCCCAAAIIlEyAZLRk9FwYAQQQQAABBBBAgGSU
1wACCCCAAAIIIIBAyQRIRktGz4URQAABBBBAAAEESEZ5DSCAAAIIIIAAAgiUTIBktGT0XBgBBBBA
AAEEEECAZJTXAAIIIIAAAggggEDJBEhGS0bPhRFAAAEEEEAAAQRIRnkNIIAAAggggAACCJRMgGS0
ZPRcGAEEEEAAAQQQQIBklNcAAggggAACCCCAQMkESEZLRs+FEUAAAQQQQAABBEhGeQ0ggAACCCCA
AAIIlEyAZLRk9FwYAQQQQAABBBBAgGSU1wACCCCAAAIIIIBAyQRIRktGz4URQAABBBBAAAEEalpa
WrRCPB6vqanRn+99tvuKuY3oIDACAm1tbSNwFS6BQDULtLa2VnPzaTsCCBRJoLe3t729fcWKFUND
QwMDgwPy36D8O3js+HHvFfvP9tvHQF9fn5wyefLEhzt3L1u67MDRU/PnzI7FYiSjReogqg0qIMko
vymDYlEOgfACfIuFN+MMBBAIJBBVMso0fSBuCiGAAAIIIIAAAggUQ4BktBiq1IkAAggggAACCCAQ
SCDqZHT/5gULbtp+JNC1KYQAAggggEBRBf5py5YgH0WNgcoRQCC/QIHJqMo55di838fXLudbik5C
IJ9A18aNXfmFujbW1PiVKdi4Z9ty2drnHMu39QSpSAWUcaRGGCRmVSbHFQuJKkjklEGg8gRu/Oxn
839UXpNpEQJmCRSUjB7Z/q241dzcbMW/xSCoWf1tYLRdG5vXly5syfkaYh0t8YOyU3DoYLylI9YQ
NO11TlIn6pP3NXtT5qZ1Q0Prmgpr2TCiKuyCnIWA4QJP//pl/fHMC6889+KBF7pfe+W1np7Xjxz9
7bGMltl/6aX9FWj/fZn692SiWJ6/GaNFS/wFGvBP4lAXt+v2+4t+xFoaKnQKV4RAIcnokc6dnVbs
xtZFzZZ8lndKft7aPXv2rJ1XEVQ0ogoFenbd12Ft6Ny2fJZq/Kzl2yQftdbv8hmozQolJ3dusNZv
CTa0mtc6wqiqsE9pchUL1NXVjh5VP2a0fIwaa3/IJxkes5av3mB13LfLMwvStUv9SZzyra+/DVfr
nw0jcXRtdf4ulp9HQSZWRiKmbNco59hKZcJ1fQUKSEZ1Lrpo3vRm/2w0bQmpM7tvT/GrI311afrz
TPD7dmAFFfBObTt/oau/1tW46HoZU0yMBqTMT2cZqXAfsv+K/5d/Sf97X/9x/4QeB0heU1Xvqdqp
RTLI1PHLWY3zLWtfd6C5+oy+aVqphla36lQ29Sd2SquyD1DoIirOaKOqoJcQTUEgj4BkoqPqE2no
mFFjx4wePXpUXV1dllMaG1usju5u9xmVi26Ix1u82Wh3d4fV0thoWWqSw/l7tZj+Pd37LGt+o537
2p+X6VHOsZUpGWGJQPhkNJGLWlaQbDQDOdYuY6XO0d7YttjNR49sv2nBqu7WHe6zO1qbu7vZClUl
L1NJzWSgPTGh3bK+2c49VdYlw4kyNun8tLfnpy09aT4kTznlHKRZy7fIrws99tizbcv6lviWv7j2
2tShTPVbpeXahTPUKeubtzR6598bulc7M+oSQPYZK3uAxPmFEL5rcqSydqvmO63XzUrPR/XEvDhk
/aU3vKjCt4MzEDBNoKa2pr6+bvTo+rEyLKoy0VGjJDO1jyxNmbUw5eeGnYsuXC4PJrNR5yeJ5IbJ
yWv9WVdylXnqhHqeP6Q9MaQtONdVyIPyA0D/Xf7FL6sfBvpzz5/t7iJ190eXup4bTpa5fc+VNu5K
VQgSarYy+gdVgNhMe/0Qb7EFQiejyVxUstElN8Z8Z+q9LZBZe++c/bxFydOduf8l093y05fcdpvn
y2JLUH8pBeQHe0t8pbOC0v5F4B2WcCPT81RbEhNjmSMSan7NHnuUkpZdMMtvFclF9cyaW5VdSL50
Ikg7Jwljr19NBhpeLH3ARdeghlg2LHTXj6pmpS4n7dqoM9Hsa0yHHVX4dnAGAoYJ1NbU1tXW1tfW
ylBovXyWONz3HUxtj/0zIDEFonPRJkv9MZnIRtX4n/uTxHtuR6zZ+aNW/qyUNeaJJNBOJ90/Oe0l
5Jn5oZ4LUn98u3+Yd8RWSzoqPxTUEiH77/Kv/L29XMj+XP1IyFtzR2yLtUVVlvZnrH2lfe7f9VZM
pZDOESTUHGWcpUzBYjPsJUS4RRUIm4zuv7PNnqN3grLTybY7/TbVe5vgnYpfFU+mnmq6I76KrfdF
7e6yrVx+1CZ+ViaGALLE6p2nytWUpnX2wKL8QHcWc9nrvxK/QZKjGer8tBFOnwFPNXgrv5MSC0gj
tGxaKBFmDIYmLnDfarmu7IbKlYkWK6oIG0hVCJRaoK62RlJQmalPy0RzJKN24uksG1U/d/Qfi/Z3
qr1k3F4wmv0nxobOxLeq+mHkVKJmauSHh/tNbM/ipC5LVUL2ChzPH525/zB3Pf1qzpox6/iTC17t
SJ3Dr0JVLEiZ4MVK/drg+uUgEDIZ3b9TttG33pDckaSyUSu+M1g2qmbiF6yKJ2fq2+XkxCGjpupL
yUdzrSgtBy9iKJJAcsZHZuvtP/tzHPYirbyH+oUhowaekUb3N0hqLhqmKXYmqvbHF7oBXl/MXWaW
em097KFm3fThXSLQ0WHFO+OWPTySfkQUVRgIyiJgpMCoUXWyQFR2L8nIqPeQEdLs7VE/R+z5GZW3
JX6cuNlo2myGpwrPTx6dvdqVqPL2vLp7qNnsrPM/qi7356E955338Ks5e8acEb/+sZn4IeUXqt9F
Q1Rl5KuJoKMXCJeMqlxURkIXu/miyi0lqGDZqBpVbZZFoTl319t77xPrSdWQa3JFafQtp8YyEujZ
ttqehraP/Olezh/fieaov9k3bJDR0WRGl/gNYs+2hd/9qqfOJBMd9iaFPEO7ekhEr4SV3wTJCTy1
lqDJXn2Qmo5GF1UZvRAIBYEiCUgaKh+ybFQfepZeMsOcyWhiGFTlosk/gZ2fJYmJ+1DBpt3sLesP
OycLdSf08/1hnrx4kJojDzXgRQMWCxUehStQIFQyqsdFk1uMdN5oj2cGGBs9ov4+bGxMLgrNyzlv
rXfYtALlaZJHIO3PdPvP7ixHgK3sKq214ivXyb51z12U9G+Qjc7Kr1D0+XcOhapKLWSVJa+JpbG5
ztVze56dvKpgykZ8e+Akz36mcFFRGoEqEJBMVHYsuTmonp3PNUevPdSPjX27tkoumlhl7j64KzFx
nymXcq8Ney5GTebYq8VTbhaV1dy9e1PwKZiANaddTp2VeqMqd39+kAqDlJErBixWBS8/mugrECYZ
tXPRRc3p2WTQmXp7873nNvmyeNSzZlRN4afcySnH1XxbRAEDBVJ+Mjp3uff+TE98rlMyd4Qw8z7N
6me5Pfbp7mRK/lpZv17vQghz2EO2OddrhqlJ5Y9qeNXdfeU5Oe1W0u4vMG/9eumrHu6NMKpQTaAw
AsYKTD13yvjx48eOHatHR/WA6KB95GyT/FzqWL8+JRe1U1T7wVzrhTw/oOwfZXouRt+61DO7oYdA
s92zw/OHqL1z0Yc8RM2emrw/TxI/Upyng1QYpIxUF7CYsS8qAo9QIHgyqt92KTMXtax5N7QGejMm
2R7fHktO8u9cZA+qOodKVT3rRWX+32rfw276CLu6nKvS92RyFlSp2y2psUHnB7e96kre9yhxr6eD
sn6yQS+8sueyPLP66me/O/KoEtfkXH3GQtJAHPYqf/vqKUewN0BJO80ONvtEvywZ7ZyfvIja45ql
oD1iqlo0rKgCtZtCCFSUwLe+/e38H9lb69xlwzMuqsrZP0uybwtST7fEr+12fmDYu9WTu5lkYbj7
s0t+eKlb1KWvSWpal7J4fNdCew995ltlOLmkvu+wveTcr+aMFqqfOmqvp/5R2n2tZ6F+kApzl4kg
top68dGYYAI1LS3OXpF4PO7OWex9tvuKuY3BaqAUAsMSaGtra21tHVYVnIwAArkF+BYbqVeH2lGY
/U/JkYqA6yAwwgK9vb3t7e0rVqyQRdADA4MD8t+g/Dt47PhxbyT9Z/vtY6Cvr09OmTx54sOdu5ct
XXbg6Kn5c2bHYrHgI6Mj3EAuhwACCCCAAAIIIFD5AiSjld/HtDB6gdR37yxkEj/6mKgRAQQQQAAB
IwVIRo3sNoIusUDyRkyJt0pJ/v+wbwBV4rZxeQQQKExghN6lvrDgOAuBchYgGS3n3iE2BBBAAAEE
EECgwgXYwFThHVz+zZPdFeUfJBEiYLQAewSN7j6CR6BsBaLawEQyWrZdTGAIIIAAAggggED5CkSV
jDJNX759TGQIIIAAAggggEDFC5CMVnwX00AEEEAAAQQQQKB8BUhGy7dviAwBBBBAAAEEEKh4AZLR
iu9iGogAAggggAACCJSvAMlo+fYNkSGAAAIIIIAAAhUvQDJa8V1MAxFAAAEEEEAAgfIVIBkt374h
MgQQQAABBBBAoOIFSEYrvotpIAIIIIAAAgggUL4CJKPl2zdEhgACCCCAAAIIVLwAyWjFdzENRAAB
BBBAAAEEyleAZLR8+4bIEEAAAQQQQACBihcgGa34LqaBCCCAAAIIIIBA+QqQjJZv3xAZAggggAAC
CCBQ8QI1LS0tupHxeLympkZ/vvfZ7ivmNlZ842lgOQi0tbWVQxjEgEAFC7S2tlZw62gaAgiUSqC3
t7e9vX3FihVDQ0MDA4MD8t+g/Dt47Phxb0j9Z/vtY6Cvr09OmTx54sOdu5ctXXbg6Kn5c2bHYjGS
0VL1INd1BCQZ5TclrwYEiifAt1jxbKkZgSoXiCoZZZq+yl9INB8BBBBAAAEEECilAMloKfW5NgII
IIAAAgggUOUCJKNV/gKg+QgggEAlC9yz/cdBPiqZgLYhUPYCJKNl30UEiAACCCAwDIHrl1yT/2MY
dXMqAghEIBAuGT2y/aYFacdN249EEEbIKvZvlig27w92liqcEWWoGoJdh1IIIIAAAuUp8Nvjb+qP
YyfePH7i5Ik3T7558vSp02+dOXMme8CH7r/9q7fffyh4a/b/4KvhTghedd6Sua9bqogiahjVVJVA
uGTUpom173GPHa1W2+LgeWFV0dJYBBBAAIFyEpDbF9bW1Nba/9XV1dTJ/9UW8EtwhJqkkskfBBx0
GaGQuAwCRRIY5vfh9CW37dnR2hxfNbIDpPPWSjq8dt4wTIZfwzAuzqkIIIAAAiMsoDLRWklAVQ6q
P+RL9+7aIxxMgMsd+k3KjRoDnEERBIwVGGYyqto9fcmNMauz7U7vH3Ap8/lpiao9Q+45vNPtqesA
7Gfsh+Qz5zT7sdSZd6eE91y3Tim5Km5JeDJ+qw/9VObcfe6QnbKeAiObehv74iJwBBBAoDwE5B1d
JPOsk3zUzkPrJCe1j6DRqVFK98gzfb8vWS51UDOlAv+nZJXA1n0nLOvVHydGR1MqyFgRkPO6bgPV
ugP3YMA1aMdTboQEIkhGLWveopi8g9NOJxtVSdvitsbEZL49k5/I3tRzq7pbd3jm+Zu7u51Vp5L0
ec7b0x5zn7HiqxbsXGSflGs8NL6qzWrV1XqHamUEtF2Ca05eMmsF9qUtt1B7oySvKWtSJZt162+P
yVdBV6yOUD9yGQQQQACBnAI1lso8dQIqeaiUC5GJSh734+PzV/6tPlbOt/ZtzZ7Nndi3z7omUer4
jxOT7CoP/PGrFzrP/O01F0qK6SS0uZ6aefXnVs6fZFnqpI/PSyslIUw6se+e5IrWHNdNakgmu3Xf
5EQAK+dLbKEWxPLKQqDIApEko1bDnGY3zv13tnVK7ufmfNOXtLY2OwOnRzp3dlqxG5dMd0vLPP9t
+sv9m2UIM9aezBXnrXWekSdlnarPrHys3S3tvWIgviPbv6UunbzcvLWSwca/5dmbJS1yn07NvQNd
gUIIIIAAAqUTSMtE3UACpaSvv3HCOvHiPmc3k+SJKkHM2pYLr0k8MfPq911ovbpbJYyH9r14QrJK
95R5H7eTyQdl/CbPU9767Ut6rjlz/sWTrBNvvO6WyXpd99lD9+9+1RvAzKuvlwDcBpWuV7gyAgmB
aJJRj+f+nXGreVFzMt2UefzGRsuyRzntz2SUM8uw4pHubsk5Fw1nHWgyiuQVA/X0wec6reY5Dd6y
KuGUzDn7nQK8uXegC1AIAQQQQKCEAmq9qNq35IyJ6hw0UCYq5eZ9/JoLrRP7tsosd/7xxEnTZiTb
OG+enKQSRpXLXjjP+8tt5vmTLev4bw7leyobljvVbk/hJ4/s13WfVwHYE/7uoc73JrMl7BkujYAS
iCYZVdmcPuyc0rNC016mqVZt6sOZNZd8NHGUwfJLO+bGRm/+zIsDAQQQQKByBFQmqubnk+tEg2ai
+lfXx9X0vEyb65Q01Bx3no1IgfcoOVmoO9VuxxLqmOQuM3BWG+Qc3Q1VLYURiEYgkmRUjYY6o5r2
mKRnhaa7ODQ5ya12wjuHTIZL4lrqfDTkOGo08NSCAAIIIDBSArJhybt3PlQm6sSo58r/VpZ8qpxU
TbJnHCmDjfv3v2qpIUt7Tv3V/d7y6ilr8vkz8zyVUvf+B/ed0Nlk9uUBWa+brGHGNJnUZ1Z+pF5r
XKcQgQiSUXvFZXPrDXoSQk1hdz53MFgs9tpMfegZ/MQmqGCn5yyVZa1AvhqzxByyhmHGy+kIIIAA
AkUUGDt27OjRo0cljnrP4XtVe1gyOT8/74MyLJkyMZ6s4dUfJ3Y27f/Bj2Wd5vuunmk52aj7jGU/
NWn+B+V3ps5Gsz6VHpYn4dz/g7Rpek8Vnuu6NdjrV707nvRAK1vqfbueAiMmMMxkVO+c70zfPZS+
LFQ2q7u3aUpZL+rJ+ebd0Nos60mTT+/fXOCQqd4K5W6Tyrv8UyfC6uZUnnulSqtSahix3uBCCCCA
AAJRC/i+N33+C9o722UHfWLF5dYXL175OZVlZhyT5s+3nKWZ9u55ZxxTjanaO9j1Ye/MdyrI+5Ta
AiUn3X7/DLXlKbnoc/88e82A3h6ljhzXTcYnywy8Tdi6z5IQcmzCilqf+hAIIFDT0tKii8XjcXfm
Yu+z3VfMbcw8XbI0ST1THs++zz29YGIzekYFaac7twXVV9An2afIjaK8u+lVMblBlDP1n1Gpd+u7
XZO3hL5iag3pZZxLOw3NKJstpADWFMkq0NbW1traCg4CCBRJgG+xIsFSLQII9Pb2tre3r1ixYmho
aGBgcED+G5R/B48dT3nPhv6z/fYx0NfXJ6dMnjzx4c7dy5YuO3D01Pw5s2OxWLhktDzdyQ3Ls18C
RsVvyoBQFEOgMAG+xQpz4ywEEPAViCoZHeY0vW+cFEAAAQQQQAABBBBAIKcAySgvDgQQQAABBBBA
AIGSCVRCMirv4pT7bUJLJsuFEUAAAQQQQAABBHwFKiEZ9W0kBRBAAAEEEEAAAQTKU6ASNjCVpyxR
BRSQ3RUBS1IMAQQKE+CGFYW5cRYCCOQXiGoDE8korzQEEEAAAQQQQACB0AJRJaNM04em5wQEEEAA
AQQQQACBqARIRqOSpB4EEEAAAQQQQACB0AIko6HJOAEBBBBAAAEEEEAgKgGS0agkqQcBBBBAAAEE
EEAgtADJaGgyTkAAAQQQQAABBBCISoBkNCpJ6kEAAQQQQAABBBAILUAyGpqMExBAAAEEEEAAAQSi
EiAZjUqSehBAAAEEEEAAAQRCC5CMhibjBAQQQAABBBBAAIGoBEhGo5KkHgQQQAABBBBAAIHQAiSj
ock4AQEEEEAAAQQQQCAqAZLRqCSpBwEEEEAAAQQQQCC0QE1LS4s+KR6P19TU6M/3Ptt9xdzG0JVx
AgLhBdra2sKfxBkIIBBIoLW1NVA5CiGAAALhBXp7e9vb21esWDE0NDQwMDgg/w3Kv4PHjh/3VtZ/
tt8+Bvr6+uSUyZMnPty5e9nSZQeOnpo/Z3YsFiMZDW/PGZEKSDLK78tIRamscgTWrFkznMZMmTKF
b67hAHIuAgjkFyAZ5RVSIQIkoxXSkTSjCAKSjG7atKmwim+++WaS0cLoOAsBBAIKRJWMsmY0IDjF
EEAAgdIInA5/lCZQrooAAggUJEAyWhAbJyGAAAIImCDwT1u2BPkwoSnEiEDFCpCMVmzX0jAEEKgM
AdkZEPaojIZH1YobP/vZ/B9RXYh6EECgMAFzktEj229aoI+bth8prLGchQACCJgnEDYTlfLmNbLI
ET/965f1xzMvvPLciwde6H7tldd6el4/cvS3x7JfuWfb8prl23qGH1bXxppoKgofSu5LlzCo8M3g
jGoQCJWMJvPBEDnh/s1RpI/7Ny9us1p37FHHbUumV0PX0EYEEEBACZCMRvU6qKurHT2qfsxo+Rg1
1v6QT6KqnHoQQKBggVDJ6PQlt+3Zs6O12bJi7SObE+7fGbeaFzWThBbc0cacKMMRNTUbu7LEq0Yq
1FMB/6bPNrChq3CPrJcxRopAq0ZA7tsX9qgamxANlUx0VH0iDR0zauyY0aNHj6qrqwtRBUURQKA4
AqGS0eKEEKDWI93dltXYSC4awMr0Isu3xFus9bsys9GeXfd1WBs61zXlbqFKU3MmmCoRbYhZ8YOJ
YaaD8X3NJZs/M72biH8kBRgZHb52TW1NfX3d6NH1Y2VYVGWio0ZJZmofgSpXP1zcI8f0fb4yuzzn
e39IpZyT/Hmk/5bu0n89O5dL+Vs65QddjkqchnkunePHY+6aA9lQCIHhC0SQjDrz8J45/OSiTnlu
VdyyOtsWO+s9F2zenwg6ZdLfsw7UflyKqXrl2Lx7+02L2zrlLaJW6Tp0Dc6ziWozF5KmrilIXtbK
dd3hY1JDFAKzFl6bLRvt2hrraImvlFS0ad3Q0LblszKv1dO9L2cAPdtWxzo2dHpPnLV821Dnho7Y
6ijWhUXRcupAwF/gkksumZ7taGxs9D+5ikvU1tTW1dbW19bKUGi9fJY43PcdzGcjyVrzPvfv2INx
K9aQkdblLdMR616Y+JOic8P6Zp1eqhyweb38WHKOg/EW78+jjtgWa4t6Sv24k3yzITY/UdT+SzpQ
JVZHLGY5p9lnZQSes+YqfrXQ9JEXiCAZVUFLtilLOu0VnXvaY/KVk/zNWytfWlazs9pTnl07T5VX
CeHitkaZ7LePHa1W2+KUfUmSee5cZD+39n1LbnOXBtgP2DXIodYKJI72xpQKJFX1VC8xdHfbm558
rzvyPcAV0wV0NrolNUXs2rXe2rDaTkHdafqUwYOP/uW1DbEOy1ovo52Zf/4nU9nUqzWtVD/+t2Zb
FUDHIFAuAt6R0b17906YMCEtMnnkpZde8hYrl9DLJo662hpJQWWmPi0TDZSMdnd3WB337XJ2M6k/
Y4cypmjyltF/R+ujsbHF6lCTfZZdkacm+0effso+Wq5d6PzR3bNti/wATM4LzVIzSDoiv0o8p81a
vnpD+o/WPDWXTd8RSDUIRJSMSrbpbiuat0jSz/hOdwQ0C+P+O9s65RQ3rZy+pLW1ubPtzuQ5kmi6
z2btB0lzvQXURTt3dupt9vs3y3BsrD35/Ly1Ojr/61ZDn5d9G+0fmckf/RKv/oG5MMsMvTt48K//
+z4ZV7DkJ2+W3xP2mOn8xiyjqbMa51vWvu4I9syWPSsBGitwwnPI4tEHH3xw/Pjxbmvk88cff9xb
Rj43tq3FCnzUqDpZICq7l2Rk1HvICKn/JZvWdcqPpFhDcso885wgZXJcyZ0lt/+eTh7JH1kq07X/
0HYPVdSTt+phVvtIq6TFO2TetFDakcx25VIBavb3oQQCwxYI8H0Y+hoNc2SLU74jy3ak6eobxhm+
DHxB71S9Wg3gHPYK09iixABqsrqorhs4QAoWKGD/yHQHIqxc45qq9uTgQZ5r2T9wcx+pP50LjJnT
ECiWQNrupalTp7r5qGSijz32mKQgaWWKFYqx9UoaKh+ybFQfepZe3AIlo3pxkP3Xrk5Js96rKUiZ
VD8ngXTn3+0r5DpakuvdE2Pg9vBsqEqyVp6rZmM7m8ANFChGMurHYCeLnmWk9rpPTzLpd7563l74
uSqenKlXqwF8jiiu63cNno9GwJ4+T2Sjaoo+Z86ZdbwzZBApYwchz6U4AiUQmDZt2gMPPKAzUfaD
B+kAyURlx5Kbg+rZ+UBz9G7tekJ8SFaaq5w0++KeIGXcCvVf2WpLZb59maq4PbWf/PPc22C/SlL+
1LZ/lqasLs5TcxBWyiAQkUApklF7FNSzjNRd+Bn8BqLOfLvPVH6aUQTXjUidanwF1Oop/fNeT9Hr
5aKFHmqkNftkfO4J/EKvxXkIRC2QdTe95KOyflSyq6zPRh2C8fVNPXeK5O5jx47Vo6N6QFQPJ/u2
zR57TO6gV38qp6V0zvikT5lsF/Lkil0b02bYk+X10iXPZsvEje50kbyVrG9OLKLv2tic8bPUp2Zf
GwogEI1AKZJRS83jdz53sPAW5L/Vk510Zlu0OuzrFh4xZ4YVSGyqV3d08q7+D1uPLp9ro1K+BQCF
XYmzEIhcIOxNRoMkWJEHWc4Vfuvb387/kT94Ge60d9AnFmw23HftwfQbegQpk36VpnUyL59cCbpr
ob0SIH3zpvMjbJ08mYxB36VODaj6VdISl5t068jtnfsZg7CyuiB7zeXco8RWcQLFT0ZTthZpP3u/
kuyX99xvyb5XU8rX+ainyw3wrfi33LcFde4glThl3g1yY/74qmR1+zfbW/WHfd2K6/5ybpCz83O1
5KLuntLC47V3n8qPfc8NAvV9VVriW4Y16Fp4SJyJQEABktGAUFmLrVm9OshH+rlqwj2ZcSam3/Uw
dNZbyzn72hMD1W6Z9HvReTfjp1QraaL9tToz2479XDHkrMS5Dd5yWe/qHG4mmhpUkNYNpws4FwFf
gVDJqL1Q073nZ9C3iJ+31r5zk3NHUJ0hqvdy2tHa7dw5VC8a7W69IXPPUY4GyPn2HaScSuUuUClr
RlX17bHEjUlV3c67Nw33ur6eFIhQwN7G1NEReIpep6+5bmRv/7ztnO8Z3rDv2pf910qEraAqBIYr
QDI6XEHORwCB8haoaWlxtu/F43F3NffeZ7uvmNtY3pETXYUItLW1tba2VkhjaAYCkQqsWbNm06ZN
Tz31VNhaL7vssptvvnnKlCl8c4WlozwCCAQX6O3tbW9vX7FihYy+DwwMDsh/g/Lv4LHjx72V9J/t
t4+Bvr4+OWXy5IkPd+5etnTZgaOn5s+ZHYvFQo2MBg+PkggggAAC0QjwdqDROFILAgiUqwDJaLn2
DHEhgAACtgDT9LwQEECgsgVIRiu7f2kdAggYLzAv/GF8m2kAAghUkwBrRqupt8uyrawZLctuIaiy
EJA1o8OJgzWjw9HjXAQQ8BWIas0oyagvNQWKKyDJaHEvQO0IVLEAG5iquPNpOgJFFyAZLToxF0AA
AQQQQAABBBDIJRBVMsqaUV5jCCCAAAIIIIAAAiUTIBktGT0XRgABBBBAAAEEECAZ5TWAAAIIIIAA
AgggUDIBktGS0XNhBBBAAAEEEEAAAZJRXgMIIIAAAggggAACJRMgGS0ZPRdGAAEEEEAAAQQQIBnl
NYAAAggggAACCCBQMgGS0ZLRc2EEEEAAAQQQQAABklFeAwgggAACCCCAAAIlEyAZLRk9F0YAAQQQ
QAABBBAgGeU1gAACCCCAAAIIIFAyAZLRktFzYQQQQAABBBBAAIGalpYWrRCPx2tqavTne5/tvmJu
IzoIjIBAW1vbCFyFSyBQnQKtra3V2XBajQACIyDQ29vb3t6+YsWKoaGhgYHBAflvUP4dPHb8uPfq
/Wf77WOgr69PTpk8eeLDnbuXLV124Oip+XNmx2IxktER6CwukU9AklF+X/ISQSCrwJo1a4YjM2XK
FL65hgPIuQggkF+AZJRXSIUIkIxWSEfSjCIISDK6adOmwiq++eabSUYLo+MsBBAIKBBVMsqa0YDg
FEMAAQRKI3A6/FGaQLkqAgggUJAAyWhBbJyEAAIIIIAAAgggEIUAyWgUitSBAAIIFE1AdgaEPYoW
CxUjgAAC0QuQjEZvSo0IIIBAhAJhM1EpH+HVqQoBBBAotkC4ZPTI9psWpB03bT8SKsb9mxcsCHtO
rguoujxHRNVGGWEoGgojgAACWQRIRnlZIIBAZQuES0Zti1j7HvfY0Wq1LV6wYPP+kVay0+JV3a07
12 years, 4 months
[Users] oVirt Node (HyperVisor) - Memory Usage
by Alex Leonhardt
Hi All,
I've just had a little check on a hypervisor (based on CentOS 6.3).
VDSM versions:
vdsm.x86_64 4.10.0-0.44.14.el6
vdsm-cli.noarch 4.10.0-0.44.14.el6
vdsm-python.x86_64 4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch 4.10.0-0.44.14.el6
BUT - my concern is more that a VM's virtual memory (VSZ) allocation is much
higher than its configured memory?
qemu 24233 11.0 1.0 *3030420* 1008484 ? Sl 2012 2189:02
/usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Conroe -enable-kvm *-m 2048*
-smp 4,sockets=1,cores=4,threads=1 -name
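
(For what it's worth, a qemu-kvm process's VSZ is normally well above the
guest's configured RAM, since the address space also covers video memory,
I/O buffers, thread stacks and shared libraries on top of the -m allocation,
so RSS is usually the more telling number. A rough way to compare the two per
process - just a sketch, assuming the processes show up under the name
qemu-kvm as in the listing above:

$ ps -C qemu-kvm -o pid=,vsz=,rss= | \
    awk '{printf "pid %s: VSZ %d MiB, RSS %d MiB\n", $1, $2/1024, $3/1024}'

ps reports vsz/rss in KiB, so the awk step only converts them to MiB.)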
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
12 years, 4 months
[Users] All-in-one setup and multipath errors
by Gianluca Cecchi
Hello,
I have an f18 setup with all-in-one ovirt nightly.
The system has two disks /dev/sda and /dev/sdb
I notice that during ovirt setup a multipath.conf file has been written:
# RHEV REVISION 0.9
defaults {
polling_interval 5
getuid_callout "/lib/udev/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n"
no_path_retry fail
user_friendly_names no
flush_on_last_del yes
fast_io_fail_tmo 5
dev_loss_tmo 30
max_fds 4096
}
devices {
device {
vendor "HITACHI"
product "DF.*"
getuid_callout "/lib/udev/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n"
}
device {
vendor "COMPELNT"
product "Compellent Vol"
no_path_retry fail
}
}
I get many of these errors in messages
Jan 10 23:48:39 tekkaman kernel: [ 7903.668803] device-mapper: table:
253:2: multipath: error getting device
Jan 10 23:48:39 tekkaman kernel: [ 7903.668812] device-mapper: ioctl: error
adding target to table
Jan 10 23:48:39 tekkaman kernel: [ 7903.672479] device-mapper: table:
253:2: multipath: error getting device
Jan 10 23:48:39 tekkaman kernel: [ 7903.672488] device-mapper: ioctl: error
adding target to table
Jan 10 23:48:39 tekkaman kernel: [ 7903.675306] device-mapper: table:
253:2: multipath: error getting device
Jan 10 23:48:39 tekkaman kernel: [ 7903.675315] device-mapper: ioctl: error
adding target to table
Jan 10 23:48:39 tekkaman kernel: [ 7903.678268] device-mapper: table:
253:2: multipath: error getting device
Jan 10 23:48:39 tekkaman kernel: [ 7903.678276] device-mapper: ioctl: error
adding target to table
Jan 10 23:48:39 tekkaman multipathd: dm-2: remove map (uevent)
Jan 10 23:48:39 tekkaman multipathd: dm-2: remove map (uevent)
Jan 10 23:48:39 tekkaman multipathd: dm-2: remove map (uevent)
Jan 10 23:48:39 tekkaman multipathd: dm-2: remove map (uevent)
Jan 10 23:48:39 tekkaman multipathd: dm-2: remove map (uevent)
Jan 10 23:48:39 tekkaman multipathd: dm-2: remove map (uevent)
Jan 10 23:48:39 tekkaman multipathd: dm-2: remove map (uevent)
Jan 10 23:48:39 tekkaman multipathd: dm-2: remove map (uevent)
Could they be related?
$ sudo multipath -l
Jan 10 23:49:27 | multipath.conf +5, invalid keyword: getuid_callout
Jan 10 23:49:27 | multipath.conf +18, invalid keyword: getuid_callout
Is this necessary for my internal disks or not?
Otherwise, based on
[g.cecchi@tekkaman ~]$ sudo /lib/udev/scsi_id --whitelisted
--replace-whitespace --device=/dev/sda
35002538043584d30
[g.cecchi@tekkaman ~]$ sudo /lib/udev/scsi_id --whitelisted
--replace-whitespace --device=/dev/sdb
35000cca313d8b629
I would put this in my multipath.conf:
blacklist {
wwid 35002538043584d30
wwid 35000cca313d8b629
}
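
(Before blacklisting, it may be worth double-checking the WWIDs of all local
disks in one go, reusing the same scsi_id call as above - a small sketch that
assumes the internal disks all show up as /dev/sd*:

$ for d in /dev/sd[a-z]; do printf '%s  ' "$d"; \
    sudo /lib/udev/scsi_id --whitelisted --replace-whitespace --device="$d"; done

After updating multipath.conf and reloading multipathd, "sudo multipath -l"
should no longer list the blacklisted devices.)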
Thanks,
Gianluca
12 years, 4 months
[Users] No working copy and paste and USB redirect
by Jean Léolein BEBEY
Hi all,

I can't copy and paste from my desktop to the virtual guest Windows. I have
installed spice-guest-tools.0.3.exe on the VM.

USB redirection is not working either.

Any help?

I use ovirt 3.1: Fedora 17 x86_64 for the Virtualization Manager and 2 hosts
with ovirt 2.5.5

Jean
12 years, 4 months
Re: [Users] WG: No spice connection - Remote-Viewer quits after a few seconds
by David Jaša
Hi Dennis,
as you use remote-viewer, the steps below won't work for you; I thought you were still using spicec based on the previous information. Here are the steps for remote-viewer:
1. download the debug-helper binary and place it where remote-viewer.exe resides: http://elmarco.fedorapeople.org/debug-helper.exe
2. install windows build of gdb:
* get latest mingw-get-inst-BUILD_DATE.exe
* In "Select Components" choose: "MinGW Developer Toolkit" (last option)
* add path with gdb.exe to (system-wide) %PATH%
3. change registry entry:
HKCU\Software\spice-space.org\spicex\client
to point to debug-helper instead of remote-viewer from:
"$DIR\remote-viewer.exe --spice-controller"
to:
"$DIR\debug-helper.exe remote-viewer.exe --spice-controller"
4. connect to the VM
You should now see a cmd window running gdb that will print stdout/stderr of remote-viewer - could you copy the output from there?
David
Dennis Böck wrote on Thu, 10. 01. 2013 at 07:38 +0000:
> Hi David,
>
> do you have any new information about this problem?
> Do you know, whether there is anyone out there who managed to get a spice session with a Windows (Internet Explorer) client?
>
> Best regards and thanks in advance
> Dennis
> ________________________________________
> From: users-bounces(a)ovirt.org [users-bounces(a)ovirt.org]" on behalf of "Dennis Böck [dennis(a)webdienstleistungen.com]
> Sent: Wednesday, 2 January 2013 13:16
> To: users(a)oVirt.org
> Subject: Re: [Users] No spice connection - Remote-Viewer quits after a few seconds
>
> Hi David,
>
> I set SPICEC_LOG_LEVEL=0, rebooted the machine, but no spicec log was written in %temp%.
> If I perform a search in this folder for "spice" only two files appear:
> %temp%/spicex.log
> %temp%/low/spicex.log
> I even tried a new installation of http://spice-space.org/download/gtk/windows/virt-viewer-0.5.3_x86.exe but nothing changed.
>
> Best regards
> Dennis
>
> ________________________________________
> From: David Jaša [djasa(a)redhat.com]
> Sent: Wednesday, 2 January 2013 11:10
> To: Dennis Böck
> Cc: users(a)oVirt.org
> Subject: Re: [Users] No spice connection - Remote-Viewer quits after a few seconds
>
> Hi Dennis,
>
> this log isn't exactly helpful either. :( Spicec just quits before it could receive any connection info from the plugin... Could you try to set another variable: SPICEC_LOG_LEVEL=0 and find spicec log (also in %temp%)?
>
> David
>
>
> Dennis Böck wrote on Thu, 27. 12. 2012 at 14:12 +0000:
> > Hi David,
> >
> > here is my %temp%/low/spicex.log (%temp%/spicex.log is not updated!), after setting the system variable spicex_debug_level=0:
> > 1356616807 INFO [4944:5044] spicex_init_logger: started
> > 1356616807 DEBUG [4944:5044] COSpiceX::put_DynamicMenu: DynamicMenu
> > 1356616807 INFO [4944:5044] COSpiceX::put_FullScreen: New FullScreen request newVal=0x0
> > 1356616807 DEBUG [4944:5044] COSpiceX::Connect: Running spicec (C:\Users\Dennis\AppData\Local\virt-viewer\bin\remote-viewer.exe --spice-controller)
> > 1356616807 INFO [4944:5044] COSpiceX::Connect: spicec pid 5996
> > 1356616809 DEBUG [4944:5044] COSpiceX::Connect: connecting to spice client's pipe
> > 1356616814 ERROR [4944:5044] COSpiceX::Connect: failed to connect to spice client pipe
> >
> > Best regards
> > Dennis
> > ________________________________________
> > From: David Jaša [djasa(a)redhat.com]
> > Sent: Tuesday, 4 December 2012 12:16
> > To: Einav Cohen
> > Cc: Dennis Böck; users(a)oVirt.org
> > Subject: Re: [Users] No spice connection - Remote-Viewer quits after a few seconds
> >
> > Einav,
> >
> > Dennis's previous logs suggest that he's using spicex/IE/windows, not xpi/firefox/linux.
> >
> >
> > Dennis,
> >
> > could you go through my conversation with Karli Sjöberg from last week and repeat the debugging steps described there?
> >
> > David
> >
> >
> > Einav Cohen wrote on Mon, 03. 12. 2012 at 14:52 -0500:
> > > Hi Dennis,
> > >
> > > We need some more information that can be useful to us in order to try solving the problem.
> > > Can you please follow the instructions below for getting more detailed logs and reply with the results?
> > >
> > > 1. Set spice-xpi log level to DEBUG (for versions < 2.8, modify
> > > logger.ini).
> > >
> > > 2. Verify which client is running (we should get that from 1), e.g.
> > > by using top, or checking alternatives.
> > >
> > > 3. Getting version: rpm -q spice-xpi virt-viewer spice-client
> > >
> > > 4. client/remote-viewer logs:
> > >
> > > For spicec, there should be a file ~/.spicec/spicec.log
> > >
> > > For remote-viewer:
> > > In order to also get log messages of remote-viewer, run firefox from shell.
> > > For debug level log messages, the following environment variables should
> > > be set, for example
> > > $ export SPICE_DEBUG=1
> > > $ export G_DEBUG_MESSAGES=all
> > > $ firefox
> > >
> > > ----
> > > Thanks,
> > > Einav
> > >
> > > ----- Original Message -----
> > > > From: "Dennis Böck" <dennis(a)webdienstleistungen.com>
> > > > To: "Itamar Heim" <iheim(a)redhat.com>
> > > > Cc: "users(a)oVirt.org" <users(a)ovirt.org>
> > > > Sent: Sunday, December 2, 2012 5:28:58 PM
> > > > Subject: Re: [Users] No spice connection - Remote-Viewer quits after a few seconds
> > > >
> > > > I have the same problem, if I try to connect from admin portal.
> > > >
> > > > -----Original Message-----
> > > > From: Itamar Heim [mailto:iheim@redhat.com]
> > > > Sent: Saturday, 24 November 2012 23:47
> > > > To: Dennis Böck
> > > > Cc: users(a)oVirt.org
> > > > Subject: Re: [Users] No spice connection - Remote-Viewer quits after
> > > > a few seconds
> > > >
> > > > On 11/23/2012 11:04 PM, Dennis Böck wrote:
> > > > > Here are some logs:
> > > >
> > > > is the same working from admin portal?
> > > >
> > > > >
> > > > > 1351785267 INFO [4504:4560] spicex_log_cleanup: done
> > > > > 1351785268 INFO [3788:5440] spicex_log_cleanup: done
> > > > > 1351867007 INFO [4868:4916] spicex_init_logger: started
> > > > > 1351867007 INFO [4868:4916] COSpiceX::put_FullScreen: New
> > > > > FullScreen
> > > > > request newVal=0x0
> > > > > 1351867007 INFO [4868:4916] COSpiceX::Connect: spicec pid 4948
> > > > > 1351867013 ERROR [4868:4916] COSpiceX::Connect: failed to connect
> > > > > to
> > > > > spice client pipe
> > > > > 1351867038 INFO [4868:4916] COSpiceX::put_FullScreen: New
> > > > > FullScreen
> > > > > request newVal=0x0
> > > > > 1351867038 INFO [4868:4916] COSpiceX::Connect: spicec pid 2732
> > > > > 1351867043 ERROR [4868:4916] COSpiceX::Connect: failed to connect
> > > > > to
> > > > > spice client pipe
> > > > > 1351867169 INFO [4868:4916] spicex_log_cleanup: done
> > > > > 1353687854 INFO [4568:2008] spicex_init_logger: started
> > > > > 1353687854 INFO [4568:2008] COSpiceX::put_FullScreen: New
> > > > > FullScreen
> > > > > request newVal=0x0
> > > > > 1353687854 INFO [4568:2008] COSpiceX::Connect: spicec pid 5776
> > > > > 1353687860 ERROR [4568:2008] COSpiceX::Connect: failed to connect
> > > > > to
> > > > > spice client pipe
> > > > > 1353687901 INFO [3064:6084] spicex_init_logger: started
> > > > > 1353687901 INFO [3064:6084] COSpiceX::put_FullScreen: New
> > > > > FullScreen
> > > > > request newVal=0x0
> > > > > 1353687901 INFO [3064:6084] COSpiceX::Connect: spicec pid 1576
> > > > > 1353687906 ERROR [3064:6084] COSpiceX::Connect: failed to connect
> > > > > to
> > > > > spice client pipe
> > > > > 1353687928 INFO [3064:6084] COSpiceX::put_FullScreen: New
> > > > > FullScreen
> > > > > request newVal=0x0
> > > > > 1353687928 INFO [3064:6084] COSpiceX::Connect: spicec pid 2412
> > > > > 1353687934 ERROR [3064:6084] COSpiceX::Connect: failed to connect
> > > > > to
> > > > > spice client pipe
> > > > > 1353688155 INFO [3064:6084] COSpiceX::put_FullScreen: New
> > > > > FullScreen
> > > > > request newVal=0x0
> > > > > 1353688155 INFO [3064:6084] COSpiceX::Connect: spicec pid 3108
> > > > > 1353688161 ERROR [3064:6084] COSpiceX::Connect: failed to connect
> > > > > to
> > > > > spice client pipe
> > > > > 1353688247 INFO [3064:6084] COSpiceX::put_FullScreen: New
> > > > > FullScreen
> > > > > request newVal=0x0
> > > > > 1353688247 INFO [3064:6084] COSpiceX::Connect: spicec pid 3792
> > > > > 1353688252 ERROR [3064:6084] COSpiceX::Connect: failed to connect
> > > > > to
> > > > > spice client pipe
> > > > > 1353688564 INFO [3064:6084] COSpiceX::put_FullScreen: New
> > > > > FullScreen
> > > > > request newVal=0x0
> > > > > 1353688564 INFO [3064:6084] COSpiceX::Connect: spicec pid 5472
> > > > > 1353688569 ERROR [3064:6084] COSpiceX::Connect: failed to connect
> > > > > to
> > > > > spice client pipe
> > > > >
> > > > > ________________________________________
> > > > > Von: Itamar Heim [iheim(a)redhat.com]
> > > > > Gesendet: Donnerstag, 1. November 2012 19:33
> > > > > An: Simon Grinberg
> > > > > Cc: Dennis Böck; users(a)oVirt.org
> > > > > Betreff: Re: [Users] No spice connection - Remote-Viewer quits
> > > > > after a
> > > > > few seconds
> > > > >
> > > > > On 11/01/2012 06:04 PM, Simon Grinberg wrote:
> > > > >> Make sure your host is accessible and resolvable from the client
> > > > >> machine Check the Spice ports are open on the host
> > > > >
> > > > > and provide spice client side logs
> > > > >
> > > > >>
> > > > >> ---------------------------------------------------------------------
> > > > >> -
> > > > >> --
> > > > >>
> > > > >> *From: *"Dennis Böck" <dennis(a)webdienstleistungen.com>
> > > > >> *To: *"users(a)oVirt.org" <users(a)ovirt.org>
> > > > >> *Sent: *Thursday, November 1, 2012 5:32:25 PM
> > > > >> *Subject: *[Users] No spice connection - Remote-Viewer quits
> > > > >> after a
> > > > >> few seconds
> > > > >>
> > > > >> Dear oVirt-User-List,
> > > > >>
> > > > >> when I try to connect to a spice-VM by clicking the
> > > > >> console-button
> > > > >> in the user portal the "Remote Viewer" appears with the text
> > > > >> "Setting up spice session", but a few seconds later it just
> > > > >> quits
> > > > >> without an error message.
> > > > >>
> > > > >> Here are a few corresponding log-lines of engine.log:
> > > > >>
> > > > >> 2012-10-31 16:21:39,041 INFO
> > > > >> [org.ovirt.engine.core.bll.SetVmTicketCommand]
> > > > >> (ajp--0.0.0.0-8009-9)
> > > > >> [14c13c60] Running command: SetVmTicketCommand internal:
> > > > >> false.
> > > > >> Entities affected : ID: cce2dc1a-a6d5-48b2-8fcc-52aa71d9016b
> > > > >> Type: VM
> > > > >> 2012-10-31 16:21:39,046 INFO
> > > > >> [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
> > > > >> (ajp--0.0.0.0-8009-9) [14c13c60] START,
> > > > >> SetVmTicketVDSCommand(vdsId
> > > > >> = 277891b0-1cdc-11e2-b51a-002590533f86,
> > > > >> vmId=cce2dc1a-a6d5-48b2-8fcc-52aa71d9016b,
> > > > >> ticket=Yckfn2IndYcE,
> > > > >> validTime=120,m userName=admin@internal
> > > > >> <mailto:userName=admin@internal>,
> > > > >> userId=fdfc627c-d875-11e0-90f0-83df133b58cc), log id:
> > > > >> 64b13c0e
> > > > >> 2012-10-31 16:21:39,088 INFO
> > > > >> [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
> > > > >> (ajp--0.0.0.0-8009-9) [14c13c60] FINISH,
> > > > >> SetVmTicketVDSCommand, log
> > > > >> id: 64b13c0e
> > > > >> 2012-10-31 16:21:39,205 WARN
> > > > >> [org.ovirt.engine.core.bll.GetConfigurationValueQuery]
> > > > >> (ajp--0.0.0.0-8009-1) calling GetConfigurationValueQuery
> > > > >> (SSLEnabled) with null version, using default general for
> > > > >> version
> > > > >> 2012-10-31 16:21:39,206 WARN
> > > > >> [org.ovirt.engine.core.bll.GetConfigurationValueQuery]
> > > > >> (ajp--0.0.0.0-8009-1) calling GetConfigurationValueQuery
> > > > >> (CipherSuite) with null version, using default general for
> > > > >> version
> > > > >> 2012-10-31 16:21:39,207 WARN
> > > > >> [org.ovirt.engine.core.bll.GetConfigurationValueQuery]
> > > > >> (ajp--0.0.0.0-8009-1) calling GetConfigurationValueQuery
> > > > >> (EnableSpiceRootCertificateValidation) with null version,
> > > > >> using
> > > > >> default general for version
> > > > >> 2012-10-31 16:21:39,212 WARN
> > > > >> [org.ovirt.engine.core.bll.GetConfigurationValueQuery]
> > > > >> (ajp--0.0.0.0-8009-1) calling GetConfigurationValueQuery
> > > > >> (SpiceToggleFullScreenKeys) with null version, using default
> > > > >> general
> > > > >> for version
> > > > >> 2012-10-31 16:21:39,213 WARN
> > > > >> [org.ovirt.engine.core.bll.GetConfigurationValueQuery]
> > > > >> (ajp--0.0.0.0-8009-1) calling GetConfigurationValueQuery
> > > > >> (SpiceReleaseCursorKeys) with null version, using default
> > > > >> general
> > > > >> for version
> > > > >>
> > > > >> Anyone any ideas?
> > > > >>
> > > > >> Best regards
> > > > >>
> > > > >> Dennis
> > > > >>
> > > > >>
> > > > >> _______________________________________________
> > > > >> Users mailing list
> > > > >> Users(a)ovirt.org
> > > > >> http://lists.ovirt.org/mailman/listinfo/users
> > > > >>
> > > > >>
> > > > >>
> > > > >>
> > > > >> _______________________________________________
> > > > >> Users mailing list
> > > > >> Users(a)ovirt.org
> > > > >> http://lists.ovirt.org/mailman/listinfo/users
> > > > >>
> > > > >
> > > >
> > > >
> > > >
> > > >
> > > > _______________________________________________
> > > > Users mailing list
> > > > Users(a)ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/users
> > > >
> > > _______________________________________________
> > > Users mailing list
> > > Users(a)ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> >
> > --
> >
> > David Jaša, RHCE
> >
> > SPICE QE based in Brno
> > GPG Key: 22C33E24
> > Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
>
> David Jaša, RHCE
>
> SPICE QE based in Brno
> GPG Key: 22C33E24
> Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
--
David Jaša, RHCE
SPICE QE based in Brno
GPG Key: 22C33E24
Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24
12 years, 4 months
[Users] VM Priority for Run/Migration queue set failed
by Alexandru Vladulescu
Hi All,
I have the following problem:
When I try to edit the configuration of a VM (with the VM shut down), I go
to the High Availability section, select the check box for Highly Available,
and then try to increase the queue priority from Low to Medium or High.
Despite checking the box for Medium or High, after I hit OK and return to
the general information about the VM, I see that the Priority remains
unchanged at Low.
This is not what happens with the Highly Available check box, as changing
that value does update the general info tab when the VM is clicked.
Might this be a bug?
I am using CentOS 6.3 with dreyou's repo, and the rpm packages I have
installed on the node controller are:
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-config-3.1.0-3.19.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
ovirt-engine-sdk-3.1.0.5-1.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-engine-cli-3.1.0.7-1.el6.noarch
Also, this is the latest log from engine.log when I run the actions
described above.
2013-01-10 13:36:51,816 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(http--0.0.0.0-8443-2) Unable to get value of property: isQuotaDefault
for class org.ovirt.engine.core.common.businessentities.VmStatic
2013-01-10 13:36:51,820 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(http--0.0.0.0-8443-2) Unable to get value of property: isQuotaDefault
for class org.ovirt.engine.core.common.businessentities.VmStatic
2013-01-10 13:36:51,822 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(http--0.0.0.0-8443-2) Unable to get value of property: managedDeviceMap
for class org.ovirt.engine.core.common.businessentities.VmStatic
2013-01-10 13:36:51,825 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(http--0.0.0.0-8443-2) Unable to get value of property: managedDeviceMap
for class org.ovirt.engine.core.common.businessentities.VmStatic
2013-01-10 13:36:51,850 INFO [org.ovirt.engine.core.bll.UpdateVmCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] Running command: UpdateVmCommand
internal: false. Entities affected : ID:
96e6705a-030c-411a-b365-ad6ff3fcfb56 Type: VM
2013-01-10 13:36:51,868 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] START, IsValidVDSCommand(storagePoolId
= b6c128ae-5987-11e2-964c-001e8c47d368, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 35d20229
2013-01-10 13:36:51,873 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] FINISH, IsValidVDSCommand, return:
true, log id: 35d20229
2013-01-10 13:36:51,931 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UpdateVMVDSCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] START,
UpdateVMVDSCommand(storagePoolId = b6c128ae-5987-11e2-964c-001e8c47d368,
ignoreFailoverLimit = false, compatabilityVersion = null,
storageDomainId = 00000000-0000-0000-0000-000000000000,
infoDictionary.size = 1), log id: 1e0c9f16
2013-01-10 13:36:51,953 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UpdateVMVDSCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] FINISH, UpdateVMVDSCommand, log id:
1e0c9f16
If anyone can give a clue about this, it would be much appreciated.
Thanks
Alex.
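
(One way to see whether the priority change is being persisted at all is to
read the VM back through the REST API and look at its high_availability
block - a sketch only, assuming the 3.1 API layout where the VM resource
carries <high_availability> with <enabled> and <priority>, and substituting
your own engine host, credentials and VM name:

$ curl -k -u admin@internal:PASSWORD \
    "https://engine.example.com:8443/api/vms?search=name%3Dmyvm"

If <priority> in the returned XML still shows the old value after the edit,
the update is being lost on the backend rather than just mis-displayed in the
general info tab.)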
12 years, 4 months
Re: [Users] oVirt 3.1 - VM Migration Issue
by Roy Golan
On 01/03/2013 05:07 PM, Tom Brown wrote:
>
>> interesting, please search for the migrationCreate command on the destination host and search for ERROR afterwards, what do you see?
>>
>> ----- Original Message -----
>>> From: "Tom Brown" <tom(a)ng23.net>
>>> To: users(a)ovirt.org
>>> Sent: Thursday, January 3, 2013 4:12:05 PM
>>> Subject: [Users] oVirt 3.1 - VM Migration Issue
>>>
>>>
>>> Hi
>>>
>>> I seem to have an issue with a single VM and migration, other VM's
>>> can migrate OK - When migrating from the GUI it appears to just hang
>>> but in the engine.log i see the following
>>>
>>> 2013-01-03 14:03:10,359 INFO [org.ovirt.engine.core.bll.VdsSelector]
>>> (ajp--0.0.0.0-8009-59) Checking for a specific VDS only -
>>> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
>>> 2013-01-03 14:03:10,411 INFO
>>> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>> (pool-3-thread-48) [4d32917d] Running command:
>>> MigrateVmToServerCommand internal: false. Entities affected : ID:
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 Type: VM
>>> 2013-01-03 14:03:10,413 INFO [org.ovirt.engine.core.bll.VdsSelector]
>>> (pool-3-thread-48) [4d32917d] Checking for a specific VDS only -
>>> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
>>> 2013-01-03 14:03:11,028 INFO
>>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (pool-3-thread-48) [4d32917d] START, MigrateVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
>>> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
>>> 5011789b
>>> 2013-01-03 14:03:11,030 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] VdsBroker::migrate::Entered
>>> (vm_guid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806,
>>> srcHost=10.192.42.196, dstHost=10.192.42.165:54321, method=online
>>> 2013-01-03 14:03:11,031 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] START, MigrateBrokerVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
>>> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
>>> 7cd53864
>>> 2013-01-03 14:03:11,041 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] FINISH, MigrateBrokerVDSCommand, log
>>> id: 7cd53864
>>> 2013-01-03 14:03:11,086 INFO
>>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (pool-3-thread-48) [4d32917d] FINISH, MigrateVDSCommand, return:
>>> MigratingFrom, log id: 5011789b
>>> 2013-01-03 14:03:11,606 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-29) vds::refreshVmList vm id
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 is migrating to vds
>>> ovirt-node.domain-name ignoring it in the refresh till migration is
>>> done
>>> 2013-01-03 14:03:12,836 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) VM test002.domain-name
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 moved from MigratingFrom --> Up
>>> 2013-01-03 14:03:12,837 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) adding VM
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 to re-run list
>>> 2013-01-03 14:03:12,852 ERROR
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) Rerun vm
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806. Called from vds
>>> ovirt-node002.domain-name
>>> 2013-01-03 14:03:12,855 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-48) START, MigrateStatusVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806), log id: 4721a1f3
>>> 2013-01-03 14:03:12,864 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Failed in MigrateStatusVDS method
>>> 2013-01-03 14:03:12,865 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Error code migrateErr and error message
>>> VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
>>> error = Fatal error during migration
>>> 2013-01-03 14:03:12,865 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Command
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
>>> return value
>>> Class Name:
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
>>> mStatus Class Name:
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
>>> mCode 12
>>> mMessage Fatal error during migration
>>>
>>>
>>> 2013-01-03 14:03:12,866 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Vds: ovirt-node002.itvonline.ads
>>> 2013-01-03 14:03:12,867 ERROR
>>> [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-48)
>>> Command MigrateStatusVDS execution failed. Exception:
>>> VDSErrorException: VDSGenericException: VDSErrorException: Failed to
>>> MigrateStatusVDS, error = Fatal error during migration
>>> 2013-01-03 14:03:12,867 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-48) FINISH, MigrateStatusVDSCommand, log id: 4721a1f3
>>>
>>> Does anyone have any idea what this might be? I am using 3.1 from
>>> dreyou as these are CentOS 6 nodes
>>>
> any clue on which log on the new host ? I see the following in messages
VDSM is the virtualization agent; look at /var/log/vdsm/vdsm.log on the destination host.
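For example, something along these lines on the destination host should surface the failure; a minimal sketch, assuming the default vdsm log location and using the VM id from this thread:

grep -n "migrationCreate" /var/log/vdsm/vdsm.log | tail
grep -n "ERROR" /var/log/vdsm/vdsm.log | grep 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 | tail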
>
> Jan 3 16:03:20 ovirt-node vdsm Storage.LVM WARNING lvm vgs failed: 5 [] [' Volume group "ab686999-f320-4a61-ae07-e99c2f858996" not found']
> Jan 3 16:03:20 ovirt-node vdsm Storage.StorageDomain WARNING Resource namespace ab686999-f320-4a61-ae07-e99c2f858996_imageNS already registered
> Jan 3 16:03:20 ovirt-node vdsm Storage.StorageDomain WARNING Resource namespace ab686999-f320-4a61-ae07-e99c2f858996_volumeNS already registered
> Jan 3 16:03:58 ovirt-node vdsm vm.Vm WARNING vmId=`9dc63ce4-0f76-4963-adfe-6f8eb1a44806`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
> Jan 3 16:03:58 ovirt-node vdsm vm.Vm WARNING vmId=`9dc63ce4-0f76-4963-adfe-6f8eb1a44806`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
> Jan 3 16:03:59 ovirt-node kernel: device vnet2 entered promiscuous mode
> Jan 3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering forwarding state
> Jan 3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering disabled state
> Jan 3 16:03:59 ovirt-node kernel: device vnet2 left promiscuous mode
> Jan 3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering disabled state
>
> and the following in the qemu log for that VM on the new node
>
> 2013-01-03 16:03:59.706+0000: starting up
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Nehalem -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name test002.itvonline.ads -uuid 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6-3.el6.centos.9,serial=55414E03-C241-11DF-BBDA-64093408D485_d4:85:64:09:34:08,uuid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/test002.itvonline.ads.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-01-03T16:03:58,driftfix=slew -no-shutdown -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/rhev/data-center/bb0beebf-edab-41e2-83b8-16bdbbc5
> dda7/2a1939bd-9fa3-4896-b8a9-46234172aae7/images/e8711e5d-2f06-4c0f-b5c6-fa0806d7448f/0d93c51f-f838-4143-815c-9b3457d1a934,if=none,id=drive-virtio-disk0,format=raw,serial=e8711e5d-2f06-4c0f-b5c6-fa0806d7448f,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=32,id=hostnet0,vhost=on,vhostfd=33 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:c0:2a:00,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/test002.itvonline.ads.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/test002.itvonline.ads.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev pty,id=charconsole0 -de
> vice virtconsole,chardev=charconsole0,id=console0 -device usb-tablet,id=input0 -vnc 10.192.42.165:4,password -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -incoming tcp:0.0.0.0:49160 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> 2013-01-03 16:03:59.955+0000: shutting down
>
> but that's about it?
>
> thanks
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Trey Dockendorf
On Jan 3, 2013 4:15 PM, "Moran Goldboim" <mgoldboi(a)redhat.com> wrote:
>
> On 01/03/2013 07:42 PM, Darrell Budic wrote:
>>
>>
>> On Jan 3, 2013, at 10:25 AM, Patrick Hurrelmann wrote:
>>
>>> On 03.01.2013 17:08, Itamar Heim wrote:
>>>>
>>>> Hi Everyone,
>>>>
>>>>
>>>> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
>>>>
>>>> find good/useful in oVirt, and what they would like to see
>>>>
>>>> improved/added in coming versions?
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Itamar
>>>
>>>
>>> For me, I'd like to see official rpms for RHEL6/CentOS6. According to
>>> the traffic on this list quite a lot are using Dreyou's packages.
>>
>>
>> I'm going to second this strongly! Official support would be very much
appreciated. Bonus points for supporting a migration from the dreyou
packages. No offense to dreyou, of course; I'd just rather be better supported
by the official line on CentOS 6.x.
>
>
> EL6 rpms are planned to be delivered with 3.2 GA version, and nightly
builds from there on.
> hopefully we can push it to 3.2 beta.
>
> Moran.
>
>>
>>
>> Better support/integration of windows based SPICE clients would also be
much appreciated, I have many end users on Windows, and it's been a chore
to keep it working so far. This includes the client drivers for windows VMs
to support the SPICE display for multiple displays. More of a client side
thing, I know, but a desired feature in my environment.
>>
>> Thanks for the continued progress and support as well!
>>
>> -----------------
>> Darrell Budic
>> Zenfire
>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
Will the EL6 releases also include an EL6 version of ovirt-node? If not
will the build dependencies for ovirt node be available to allow for custom
node iso builds?
- Trey
12 years, 4 months
[Users] Configure NFS resource from Host
by jj197005
Hello everybody,
I'm trying to configure and install oVirt for testing in our University
faculty. I have installed an engine and a host, both of them with Fedora
17. The engine is working and I can log in to the Data Center as the admin
user. I have tried to add my host through the data center portal and did so
successfully. After that I tried to add an NFS resource which
is on the host that I have added. If I open a console on the engine and
log in as the vdsm user, I can mount the NFS resource without problems. The
problem is when I try to add this NFS resource to the Data Center.
I have followed the tutorial
http://www.ovirt.org/Quick_Start_Guide#Configure_Storage, but when I
have completed the form and pressed "Ok", after one or two minutes I
get a screen with the error below:
"Error: A Request to the Server failed with the following Status Code: 500"
I'm attaching the vdsm.log file with the lines that are created by
this operation. I hope that someone can show me what the problem is. If
you need more information about the installation I can provide it.
Many thanks in advance,
Juanjo.
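The 500 from the engine is generic; since the mount as the vdsm user was only tried on the engine machine, the same check is worth repeating on the host itself. A rough sketch (server name and export path below are placeholders):

mkdir -p /tmp/nfstest
mount -t nfs -o vers=3 nfsserver.example.com:/export/data /tmp/nfstest
sudo -u vdsm touch /tmp/nfstest/write_test && echo "vdsm can write"
umount /tmp/nfstest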
12 years, 4 months
[Users] Update on dates for oVirt 3.2 Release
by Mike Burns
At today's oVirt meeting, we reviewed the dates for the oVirt 3.2
Release. The updated dates are:
Devel Freeze and Branching: 2013-01-14
Beta Posted: 2013-01-15
Test Day: 2013-01-24
Target GA: 2013-01-30
There are several reasons for the slip. It is partially due to the slip
in the Fedora 18 Release schedule. The move to Fedora 18 has also
caused some issues for some of the sub-projects, most notably
ovirt-node.
Please let me know if you have any questions or concerns
Thanks
Mike Burns
on behalf of
The oVirt Team
12 years, 4 months
[Users] oVirt Weekly Meeting Minutes -- 2013-01-09
by Mike Burns
#ovirt: oVirt Weekly Sync
Meeting started by mburns at 15:01:57 UTC (full logs).
Meeting summary
agenda and roll call (mburns, 15:02:02)
workshops (mburns, 15:04:33)
NetApp workshop (Jan 22-24) is mostly ready to go. USB keys ordered,
facilities arranged, schedule online (dneary, 15:06:54)
Current activities are mostly around organising of board meeting, and
co-ordinating burn-in of USB keys with latest pre-release version of 3.2
(dneary, 15:07:36)
Accommodation block has expired - if you need a room for the dates of
the conference, you should call up ASAP to ensure availability, and we
cannot guarantee the rate any more (dneary, 15:08:22)
Registration status: 63 registered, capacity is 100. ~20-25 Red Hatters,
~40 non-Red Hatters (dneary, 15:09:08)
Registration will close on January 15th, due to our requirement to
finalise numbers for catering, and get visitor badges made (dneary,
15:09:57)
Currently promoting the workshop to the Bay Area clouderati, and getting
some decent traction this week (some Citrix/CloudStack sign-ups, one
Inktank sign-up, and promising feedback from Cloudfoundry) (dneary,
15:11:50)
release status (mburns, 15:19:10)
not making 2013-01-09 release date (mburns, 15:21:02)
status update for ovirt-node (mburns, 15:21:11)
found some late breaking blocking issues with ovirt-node and move to F18
(mburns, 15:21:37)
some around selinux changes, some around various other component changes
(mburns, 15:21:53)
patches are in review now that should fix them (mburns, 15:23:03)
beta branch for all packages due by Monday January 14 (mburns, 15:38:12)
beta posted by Tuesday Jan 15? (mburns, 15:38:26)
test day 2013-01-24 (mburns, 15:40:56)
AGREED: release date target set for 30-Jan (mburns, 15:50:41)
ACTION: mburns to update release page and send communication to lists
(mburns, 15:52:24)
infra report (mburns, 15:55:40)
working on details of new hosting design (quaid, 15:56:17)
http://etherpad.ovirt.org/p/new_hosting_design_Jan_2013 (quaid,
15:57:47)
we'll talk on arch@ about service cutover dates & such (quaid, 15:58:31)
workshop - China (mburns, 16:00:27)
dates are set for workshop in China, but very early in the process
(20-21 March) (mburns, 16:03:29)
need call for content, discussion on whether workshops are the right way
to go about this (mburns, 16:03:48)
workshop will be in Shanghai (mburns, 16:04:50)
other topics (mburns, 16:06:13)
Meeting ended at 16:09:41 UTC (full logs).
Action items
mburns to update release page and send communication to lists
Action items, by person
mburns
mburns to update release page and send communication to lists
People present (lines said)
mburns (93)
dneary (45)
mgoldboi (33)
aglitke1 (15)
sgordon (14)
lh (11)
quaid (10)
ovirtbot (6)
karimb (5)
itamar (5)
oschreib_ (4)
jb_netapp (3)
itamar1 (2)
dustins (1)
garrett (1)
Generated by MeetBot 0.1.4.
12 years, 4 months
[Users] trouble with pci passthrough
by Andreas Huser
hi everybody
i have trouble with pci passthrough of a parallel port adapter. I need this for a key dongle.
The Server is a single machine and i want to use them with the all-in-one plugin from ovirt.
I do some tests with:
Fedora 17, CentOS6.3, Oracle Linux 6.3
latest kernel, qemu-kvm and libvirt from repos. No extras or advanced configurations. Only a simple standard Server.
I install "yum groupinstall virtualization" + virt-manager and some other.
I configure iommu, modul blacklist and some other.
Then i starting a Windows Server 2003 and assign the parallel adapter to the running server. I look in the device manager and found the adapter card.
The dongle work finde and the Datev Lizenz Service are online.
.. so far so good
but when i install on the same Server ovirt. With same kernel qemu-kvm and libvirt!
And i attach the adapter card to the windows server 2003 look in the device manager and found the card with a error "device cannot be start (code 10)"
I am now looking for several days after the error and have diverse tried but I can not keep going.
can someone help me?
Thanks & greetings
Andreas
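A few generic checks that usually narrow this kind of problem down; a rough sketch for an Intel box, with a placeholder PCI address:

dmesg | grep -e DMAR -e IOMMU           # confirm the IOMMU really is enabled (intel_iommu=on on the kernel command line)
lspci -nn | grep -i parallel            # note the adapter's PCI address and vendor:device id
virsh nodedev-dettach pci_0000_05_00_0  # make sure the device is detached from the host before the guest starts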
12 years, 4 months
[Users] ovirt tools - compatibility with RHEV
by Jiri Belka
Hi,
I just discovered some package name differences between the oVirt tools
and the RHEV tools (iso-uploader etc...). The parameters for handling certificates
are also different.
Are the oVirt tools compatible with RHEV? Or should I use RHEV-specific
packages to manage RHEV environments? If the oVirt tools are tested to
always work with RHEV, that would be nice.
jbelka
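For reference, the same upload looks like this with the two tool names (the flags shown are the common ones, but they have drifted between versions, so check each tool's --help):

engine-iso-uploader --iso-domain=ISO_DOMAIN upload some.iso   # oVirt packaging
rhevm-iso-uploader --iso-domain=ISO_DOMAIN upload some.iso    # RHEV packaging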
12 years, 4 months
[Users] Best practice to resize a VM disk image
by Ricky
Hi,
If I have a VM that has run out of disk space, how can I increase the
space in the best way? One way is to add a second, bigger disk to the VM
and then use dd or similar to copy. But is it possible to stretch the
original disk inside or outside oVirt and get oVirt to know the bigger
size?
Regards //Ricky
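For the "outside oVirt" variant, the low-level steps usually look roughly like this. A sketch only, with a made-up image path and size, and with the VM shut down; on a real setup the image sits under /rhev/data-center/... named by UUID, and whether the engine picks up the new size is exactly the open question here:

qemu-img info /path/to/disk-image        # check current virtual size and format
qemu-img resize /path/to/disk-image +20G
# the partition table / LVM / filesystem inside the guest still has to be grown afterwards,
# e.g. with parted, then pvresize/lvextend and resize2fs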
12 years, 4 months
[Users] Successfully virt-v2v from CentOS 6.3 VM to Ovirt 3.2 nightly
by Gianluca Cecchi
Hello,
on my oVirt Host configured with F18 and all-in-one and ovirt-nightly as of
ovirt-engine-3.2.0-1.20130107.git1a60fea.fc18.noarch
I was able to import a CentOS 5.8 VM coming from a CentOS 6.3 host.
The oVirt node server is the same where I'm unable to run a newly created
WIndows 7 32bit vm...
See http://lists.ovirt.org/pipermail/users/2013-January/011390.html
In this thread I would like to report about successful import phases and
some doubts about:
1) no password requested during virt-v2v
2) no connectivity in the imported guest.
On CentOS 6.3 host
# virt-v2v -o rhev -osd 10.4.4.59:/EXPORT --network ovirtmgmt c56cr
c56cr_001: 100%
[===================================================================================]D
0h02m17s
virt-v2v: c56cr configured with virtio drivers.
---> I would expect to be asked for the password of a privileged user in
the oVirt infrastructure; instead the export process started without any prompt.
Is this correct?
In my opinion in this case it could be a security concern....
during virt-v2v command, on oVirt node I see this inside NFS Export domain:
$ sudo ls -l
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/v2v.pmbPOGM_/30df5806-6911-41b3-8fef-1fd8d755659f
total 10485764
-rw-r--r--. 1 vdsm kvm 10737418240 Jan 9 16:05
0d0e8e12-8b35-4034-89fc-8cbd4a7d7d81
At the end of the process:
$ sudo ls -l /EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/images/
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:05
30df5806-6911-41b3-8fef-1fd8d755659f
$ sudo ls -lR /EXPORT/
/EXPORT/:
total 4
drwxr-xr-x. 5 vdsm kvm 4096 Jan 9 16:06
b878ad09-602f-47da-87f5-2829d67d3321
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321:
total 12
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:01 dom_md
drwxr-xr-x. 3 vdsm kvm 4096 Jan 9 16:06 images
drwxr-xr-x. 4 vdsm kvm 4096 Jan 9 16:02 master
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/dom_md:
total 8
-rw-rw----. 1 vdsm kvm 0 Jan 9 16:01 ids
-rw-rw----. 1 vdsm kvm 0 Jan 9 16:01 inbox
-rw-rw----. 1 vdsm kvm 512 Jan 9 16:01 leases
-rw-r--r--. 1 vdsm kvm 350 Jan 9 16:01 metadata
-rw-rw----. 1 vdsm kvm 0 Jan 9 16:01 outbox
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/images:
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:05
30df5806-6911-41b3-8fef-1fd8d755659f
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/images/30df5806-6911-41b3-8fef-1fd8d755659f:
total 10485768
-rw-r--r--. 1 vdsm kvm 10737418240 Jan 9 16:06
0d0e8e12-8b35-4034-89fc-8cbd4a7d7d81
-rw-r--r--. 1 vdsm kvm 330 Jan 9 16:05
0d0e8e12-8b35-4034-89fc-8cbd4a7d7d81.meta
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master:
total 8
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:02 tasks
drwxr-xr-x. 3 vdsm kvm 4096 Jan 9 16:06 vms
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master/tasks:
total 0
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master/vms:
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Jan 9 16:06
2398149c-32b9-4bae-b572-134d973a759c
/EXPORT/b878ad09-602f-47da-87f5-2829d67d3321/master/vms/2398149c-32b9-4bae-b572-134d973a759c:
total 8
-rw-r--r--. 1 vdsm kvm 4649 Jan 9 16:06
2398149c-32b9-4bae-b572-134d973a759c.ovf
Then I began the vm import in webadmin:
Import process has begun for VM(s): c56cr.
You can check import status in the 'Events' tab of the specific destination
storage domain, or in the main 'Events' tab
---> regarding the import status, the "specific destination storage domain"
would be my DATA domain, correct?
Because I see nothing in it and nothing in export domain.
Instead I correctly see in main events tab of the cluster these two messages
2013-Jan-09, 16:16 Starting to import Vm c56cr to Data Center Poli, Cluster
Poli1
2013-Jan-09, 16:18 Vm c56cr was imported successfully to Data Center Poli,
Cluster Poli1
So probably the first option should go away...?
During the import, on the oVirt host
[g.cecchi@f18aio ~]$ vmstat 3
procs -----------memory---------- ---swap-- -----io---- -system--
----cpu----
r b swpd free buff cache si so bi bo in cs us sy id
wa
1 1 0 1684556 121824 28660956 0 0 8 69 21 66 0 0
99 0
1 1 0 1515192 121824 28830112 0 0 0 58749 4468 6068 0 3
85 11
0 1 0 1330708 121828 29014320 0 0 0 59415 4135 5149 0 4
85 11
$ sudo iotop -d 3 -P -o -k
Total DISK READ: 0.33 K/s | Total DISK WRITE: 56564.47 K/s
PID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
22501 idle vdsm 55451.24 K/s 56459.45 K/s 0.00 % 91.03 % dd
if=/rhev/data-center/~count=10240 oflag=direct
831 be/4 root 0.00 K/s 0.00 K/s 0.00 % 3.56 % [flush-253:1]
576 be/3 root 0.00 K/s 19.69 K/s 0.00 % 0.72 % [jbd2/dm-1-8]
23309 be/3 vdsm 0.33 K/s 0.00 K/s 0.00 % 0.00 % python
/usr/share/vdsm/st~moteFileHandler.pyc 49 47
17057 be/4 apache 0.00 K/s 2.63 K/s 0.00 % 0.00 % httpd
-DFOREGROUND
15524 be/4 root 0.00 K/s 1.31 K/s 0.00 % 0.00 % libvirtd
--listen
$ ps -wfp 22501
UID PID PPID C STIME TTY TIME CMD
vdsm 22501 16120 8 16:16 ? 00:00:14 /usr/bin/dd
if=/rhev/data-center/89d40d09-5109-4070-b9b0-86f1addce8af/b878ad09-602f-
I was then able to power on and connect via vnc to the console.
But I noticed it has no connectivity with its gateway
Host is on vlan 65
(em3 + em3.65 cofigured)
host has
3: em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
ovirtmgmt state UP qlen 1000
link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
inet6 fe80::21c:c4ff:feab:3add/64 scope link
valid_lft forever preferred_lft forever
...
6: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP
link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
inet6 fe80::21c:c4ff:feab:3add/64 scope link
valid_lft forever preferred_lft forever
7: em3.65@em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP
link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
inet 10.4.4.59/24 brd 10.4.4.255 scope global em3.65
inet6 fe80::21c:c4ff:feab:3add/64 scope link
valid_lft forever preferred_lft forever
...
13: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master ovirtmgmt state UNKNOWN qlen 500
link/ether fe:54:00:d3:8f:a3 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fed3:8fa3/64 scope link
valid_lft forever preferred_lft forever
[g.cecchi@f18aio ~]$ ip route list
default via 10.4.4.250 dev em3.65
10.4.4.0/24 dev em3.65 proto kernel scope link src 10.4.4.59
ovirtmgmt is tagged in datacenter Poli1
guest is originally configured (and it kept this) on bridged vlan65
on the CentOS 6.3 host. Its parameters
eth0 with
ip 10.4.4.53 and gw 10.4.4.250
from webadmin pov it seems ok. see also this screenshot
https://docs.google.com/open?id=0BwoPbcrMv8mvbENvR242VFJ2M1k
any help will be appreciated.
do I have to enable some kind of routing not enabled by default..?
Thanks,
Gianluca
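A few host-side checks that usually tell whether the VLAN tagging is the problem (nothing here is oVirt-specific; the interface names are the ones from the output above):

brctl show ovirtmgmt              # both vnet0 and the em3/em3.65 uplink should appear as ports of the bridge
cat /proc/net/vlan/config         # confirm which interface actually carries VLAN 65
tcpdump -i vnet0 -n arp           # see whether the guest's ARP traffic gets answered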
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Sigbjorn Lie
On Thu, January 3, 2013 17:08, Itamar Heim wrote:
> Hi Everyone,
>
>
> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they find good/useful in oVirt,
> and what they would like to see improved/added in coming versions?
Hi,
+1 for clustered ovirt manager for availability.
I've seen a pdf document describing how to configure Red Hat Cluster and GFS with RHEV-M, but I
feel Red Hat Cluster and GFS add too much complexity compared to what is required for the ovirt
manager.
Perhaps the use of glusterfs and keepalived (http://www.keepalived.org/) would be sufficient to
create an easy to configure ovirt manager failover cluster?
Regards,
Siggi
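As a very rough illustration of the keepalived half of that idea (the interface name, VIP and password below are made up; each of the two manager nodes would run keepalived with a different priority):

cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance OVIRT_ENGINE {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.10/24
    }
}
EOF
service keepalived restart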
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Karli Sjöberg
A general +1 for every point basically. Very good and thought-through points. The ones I like the most are:

* Spice client and plugin for Firefox on Windows and Ubuntu
* Preferably working through both Admin and User (customer) portal ;)
* Guest Agent for Windows. Most of our VMs are Win2008 and R2, 2012's are coming too.
* Host Profiles would be a time saver
* Resource Pools
* Clone VMs
* Resizing VM disks from GUI
* Upload ISOs from GUI
* Integrate virt-v2v into GUI
* Edit VM settings before import

Also I would like to make a wish of my own:
* Time-share scheduling.
- The ability to "rent out" resources available only during a given window of time for different groups of users. Our users tend to only need resource capacity in batches, crunching their numbers or whatnot during an intense period of time, and can be quiet for a while, then come back for some more. That's why I would like to see an interface where we as an organization could pool together our metal into oVirt and then together sit down and schedule it so that everyone gets their piece of the cake, so to speak. E.g. Group X rents 64 vCPUs and 4 TB RAM from 20130601 to 20130801, during which time they are guaranteed those resources, and then those machines will expire and be automatically deleted, freeing up those resources again. Something like that would make collaboration between our different groups much easier.

Best Regards and Happy New Year!
/Karli Sjöberg

On Fri 2013-01-04 at 12:11 +0100, René Koch (ovido) wrote:

Hi,

First of all thanks for starting this discussion.

My feedback is mainly based on RHEV, as I'm using RHEV in our company
and on the customer side. oVirt is cool as I can see what new features the next
RHEV version may/will bring and what issues other users have with this
technology.

What I like:
* web based UI
* KVM as it brings the highest virtualization performance on the market
* RHEL/Fedora as a hypervisor for more flexibility
* tags and bookmarks with search filters - great for big installations
* server and desktop virtualization can be done with one gui
* open source of course ;)
* hook scripts - sooo cool!
* rest-api
* user portal - some of our customers use this as a basic cloud tool for
providing vms and self-provisioning features with accounting using the
postgresql database

What I would like to see in a future release in oVirt and, after testing, in
RHEV (most of these points are based on customer feedback, too, from customers who
bought RHEV or where we had at least a pre-sales appointment):

* Spice-XPI for Firefox on Windows (and maybe Ubuntu):
I know it's a lot of work to build it for every new Firefox version, but
providing it for Firefox LTS would be a really good start. At the moment
Internet Explorer is required for full use of the admin and user portal (or
VNC has to be used).

* ovirt-guest-agents for major Linux distributions:
- openSUSE
- SLES
- Ubuntu
- Debian

* Virtualize ovirt-engine on ovirt hosts
I know that this would require major changes due to how oVirt works
(libvirt and vdsm), but nearly every customer is asking: why can't I
install my hypervisors, create a vm for the RHEV manager and configure the
environment then (in the same way as it is possible with VMware)?

* Make ovirt-engine more Enterprise ready
Compared with VMware (sorry for always comparing it with VMware, but
it's the market leader and it has many nice features which would be
great in oVirt/RHEV, too) oVirt requires more tasks to be done on the command line
or via API/Shell/GUI as features are missing in the web GUI:

- Deploy network configs on all hosts
When creating a new logical network in a Cluster it has to be created on
all hosts before it can be used. It would be great if this could be done
automatically (except for rhevm/spice/storage networks where an IP
address for each host is required). The workflow could be:
+ create new logical network
+ click on deploy button
+ create ifcfg-files on hosts
+ make ifcfg-files persistent on ovirt Nodes
+ bring interface up
+ check if interfaces are up
+ make logical network Operational if deployment was successful on all
hosts

- Configure ovirt hosts from GUI
It would really be an improvement if all settings of the TUI (like DNS,
NTP, syslog, kdump, admin and root passwords, hostname,...) could be
configured in the ovirt-engine gui, too (in the hosts tab). Plus, the ability to
change multipath.conf in the Web GUI would be very nice.

- Create host profiles
Storing all configuration settings of the TUI and host tab (without ip
addresses, for sure) in a profile would make changes easier and speed up
deployment of new hypervisors. I'm thinking of:
+ create a profile which contains e.g. DNS, NTP and admin passwords
+ link profile to hosts
+ change DNS in profile
+ in the hosts tab you can see that there are changes between the profile
and the hosts
+ bring host into maintenance mode
+ sync profile (and maybe reboot host)
When installing new hosts (via CD or PXE) the profile can be used to
automatically configure the host without providing all settings via boot
options + configure all networks, custom multipath.conf,... The host profile
should be used for full RHEL/Fedora hosts as well.

- Update RHEL/Fedora Hypervisors from GUI
It would be nice if Fedora hosts could be updated from the ovirt-engine
GUI like the ovirt Node: running yum check-update on the host and displaying
a notice if there are updates available.

- Implement resource pools
At the moment only Quotas are available (which is great, btw), but in
some cases it's necessary to implement resource pools. E.g. limit CPU,
Memory and Network for a group of test vms, but give full resources to
production vms. This could be done with cgroups.

- Clone vms
I'm missing the possibility of cloning vms without creating a template.

- Resize disk in GUI
Increasing the size of a disk would help a lot. At the moment I create
new disks and put them into a volume group on the vm, but resizing would be
nicer in some cases and will reduce the number of disks.

- Upload ISOs within the GUI to the ISO domain

- Integrate virt-v2v into the oVirt GUI
It would be cool if VMs could be migrated from other systems within the
GUI using virt-v2v as a backend.

- Edit settings of vms before importing them
Most of the settings like disk type (IDE/VirtIO) and nic, but also vm
type (server/desktop) and access protocol (Spice/VNC), can't be edited
within the oVirt/RHEVM GUI before importing them. These settings can only
be changed by editing the xml directly. Disk type and nic are risky of
course, but this works for most Linux distributions.

- Use existing share for ISO domain
When creating an ISO domain, oVirt/RHEV creates its own directory
structure with IDs. It would be nice if an existing share could be used
(e.g. an ISO share on an NFS server which is used by other services, too)
without creating the structure with IDs. I know that the IDs are needed
internally but I think it should be possible to reuse an existing share.

- Put the ISO domain on NFS, iSCSI, FC and Gluster storage, too

I know that these are a lot of features I would like to see and it would
be great if some of them were implemented in future releases.

Btw, one of the coolest features of oVirt seems to be the UI plugin
feature, which I haven't tested yet. I really hope that this will be
available in RHEV 3.2, too.

As I don't have that much Java knowledge to implement some of my feature
requests and contribute the code, I can do some Perl and JavaScript and
extend oVirt (and hopefully RHEV, too) like Oved did with Foreman (great
job and many thanks for the documentation in your wiki!).
12 years, 4 months
[Users] Error in Cron config?
by Tom Brown
Hi - I just enabled mail out from my HVs and I am seeing this:
/etc/cron.hourly/vdsm-logrotate:
error: /etc/logrotate.d/vdsm:18 unknown option 'su' -- ignoring line
error: /etc/logrotate.d/vdsm:18 unexpected text
I use dreyou, however I am doubtful he created this file:
[root@ovirt-node002 ~]# rpm -qf /etc/logrotate.d/vdsm
vdsm-4.10.1-0.77.20.el6.x86_64
thanks
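For what it's worth, the 'su' directive only exists in newer logrotate than the 3.7.x that EL6 ships, which is consistent with the parse error above. A quick way to confirm and to quiet it, assuming line 18 really is just that 'su ...' directive:

rpm -q logrotate
logrotate -d /etc/logrotate.d/vdsm        # dry run, reproduces the same parse errors
sed -i '18s/^/#/' /etc/logrotate.d/vdsm   # comment out the offending line until the packaging is fixed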
12 years, 4 months
[Users] Different Clusters for virt-hosts and storage-hosts
by noc
Hi All,
I'm trying out ovirt-nightly and would like to split my virtualisation
hosts from my storage hosts. So I setup a second cluster (Storage) which
will only do gluster storage, added two hosts to it, added two volumes
to it (gluster-data, gluster-iso) and added a master Data domain
(GlusterData) and an iso domain (GlusterIso). The storagetype is posixFS
and creating a disk is no problem but for one thing. When it has
finished I can't see it in the interface but it exists on disk and can
be selected when a VM is created.
Now for the real test ;-)
If I add a host to the default cluster it will display an error and end
up Non Operational because it can't connect to the storage (Failed to
connect Host host01 to the Storage Domains GlusterData). From vdsm.log I
see that it tries to mount but it fails:
Thread-20::DEBUG::2013-01-09
14:19:01,792::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/sudo -n
/bin/mount -t glusterfs st01.nieuwland.nl:/gluster-data
/rhev/data-center/mnt/st01.nieuwland.nl:_gluster-data' (cwd None)
Thread-20::ERROR::2013-01-09
14:19:01,933::hsm::2212::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Running the mount command by hand and mounting it to /mnt works OK.
Problem seems to be that ../mnt/st01.nieuwland.nl:_gluster-data doesn't
exist.
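Replaying what vdsm does by hand sometimes makes the failure obvious (the commands below are the ones from the vdsm.log excerpt above; the interesting part is whether the target directory exists and what error mount itself prints):

ls -ld "/rhev/data-center/mnt/st01.nieuwland.nl:_gluster-data"
mkdir -p "/rhev/data-center/mnt/st01.nieuwland.nl:_gluster-data"
sudo -n /bin/mount -t glusterfs st01.nieuwland.nl:/gluster-data "/rhev/data-center/mnt/st01.nieuwland.nl:_gluster-data"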
Am I doing something wrong, can this not work because I'm misunderstanding
how clusters are supposed to work, or should this work?
I switched to posixfs because I have similar problems when using NFS, but
then between the two storage hosts when they are also virtualisation hosts.
This is a test install so I can rebuild anything you want/need; I have the
logs from a clean install until now.
Versions of packages on storage and virt host
Storage:
glusterfs-3.4.0qa6-1.el6.x86_64
glusterfs-server-3.4.0qa6-1.el6.x86_64
glusterfs-fuse-3.4.0qa6-1.el6.x86_64
vdsm-gluster-4.10.3-0.50.gitc6625ce.fc17.noarch
[root@st02 ~]# rpm -aq | grep vdsm
vdsm-cli-4.10.3-0.50.gitc6625ce.fc17.noarch
vdsm-gluster-4.10.3-0.50.gitc6625ce.fc17.noarch
vdsm-xmlrpc-4.10.3-0.50.gitc6625ce.fc17.noarch
vdsm-4.10.3-0.50.gitc6625ce.fc17.x86_64
vdsm-python-4.10.3-0.50.gitc6625ce.fc17.x86_64
Virt host:
glusterfs-fuse-3.4.0qa6-1.el6.x86_64
glusterfs-3.4.0qa6-1.el6.x86_64
[root@host01 ~]# rpm -aq | grep vdsm
vdsm-python-4.10.3-0.50.gitc6625ce.fc17.x86_64
vdsm-xmlrpc-4.10.3-0.50.gitc6625ce.fc17.noarch
vdsm-4.10.3-0.50.gitc6625ce.fc17.x86_64
vdsm-cli-4.10.3-0.50.gitc6625ce.fc17.noarch
Managment:
ovirt-engine-userportal-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-engine-setup-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-host-deploy-java-0.0.0-0.0.master.20130107.gitaa0edd4.fc17.noarch
ovirt-engine-tools-common-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-iso-uploader-3.1.0-1.fc17.noarch
ovirt-engine-cli-3.2.0.8-1.20130107.git0b16093.fc17.noarch
ovirt-engine-notification-service-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-engine-sdk-3.2.0.6-1.20121227.git6abd520.fc17.noarch
ovirt-host-deploy-0.0.0-0.0.master.20130107.gitaa0edd4.fc17.noarch
ovirt-engine-webadmin-portal-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-engine-genericapi-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-engine-backend-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-engine-dbscripts-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-engine-config-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-release-fedora-5-2.noarch
ovirt-engine-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-image-uploader-3.1.0-1.fc17.noarch
ovirt-engine-restapi-3.2.0-1.20130109.gitd1d2442.fc17.noarch
ovirt-log-collector-3.1.0-1.fc17.noarch
12 years, 4 months
[Users] change locale for spice console
by Jean Léolein BEBEY
Hi all,

Please, how do I change the locale for the SPICE console to have it in French (fr_FR or fr)?

I use ovirt 3.1: Fedora 17 x86_64 for the Virtualization Manager and 2 hosts with ovirt 2.5.5.

Thanks.

Jean
12 years, 4 months
[Users] libvirt implimentation in oVirt
by Arindam Choudhury
hi,
I am a new user. I have downloaded the source code and, out of curiosity,
I ran grep to find the code related to libvirt, but both greps for
"import libvirt" and "import org.libvirt" returned nothing.
Where is the libvirt related code then?
Sincerely,
Arindam Choudhury
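A pointer on where to look: the engine side is Java and drives the hosts through VDSM, and it is VDSM (Python) that talks to libvirt, so the grep pays off in the vdsm tree rather than in the engine tree. A quick sketch (the clone URL is the gerrit one and may have changed since):

git clone http://gerrit.ovirt.org/vdsm
grep -rn "import libvirt" vdsm/ | head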
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Patrick Hurrelmann
On 03.01.2013 17:25, Patrick Hurrelmann wrote:
> On 03.01.2013 17:08, Itamar Heim wrote:
>> Hi Everyone,
>>
>> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
>> find good/useful in oVirt, and what they would like to see
>> improved/added in coming versions?
>>
>> Thanks,
>> Itamar
>
> For me, I'd like to see official rpms for RHEL6/CentOS6. According to
> the traffic on this list quite a lot are using Dreyou's packages.
>
> But I'm really looking forward to oVirt 3.2 (reading all those commit
> whets my appetite) :)
>
> Regards
> Patrick
And after thinking a bit more about it, this is what I'd like to see in
addition:
- clustered engine (eliminate this SPOF)
- when FreeIPA is used for authentication, make use of its CA and
generate certificates using ipa-getcert
Regards
Patrick
--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
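Roughly what the ipa-getcert part could look like with certmonger; the engine certificate paths and the principal below are assumptions for illustration only, not something the engine supports out of the box today:

ipa-getcert request \
    -f /etc/pki/ovirt-engine/certs/apache.cer \
    -k /etc/pki/ovirt-engine/keys/apache.key.nopass \
    -N 'CN=engine.example.com' \
    -K HTTP/engine.example.com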
12 years, 4 months
Re: [Users] Cannot connect on console with spice on VM
by Jean Léolein BEBEY
Thanks,

It is OK now. The problem was the empty CA file.

Jean

________________________________
From: David Jaša <djasa(a)redhat.com>
To: Jean Léolein BEBEY <jlbebey(a)yahoo.fr>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Sent: Thursday, 3 January 2013, 13:20
Subject: Re: [Users] Cannot connect on console with spice on VM

Jean Léolein BEBEY wrote on Thu 03. 01. 2013 at 10:21 +0000:
> Hi all,
>
> When I want to connect to the console when SPICE is active in the VM, I have
> this error: VM is down. Exit message: internal error Process exited
> while reading console log output: char device redirected to /dev/pts/0
> do_spice_init: starting 0.10.1 reds_init_ssl: Could not use ca file.

That looks like a certificate installation error. There was some thread
recently where exactly such issue was resolved.

David

> So I cannot connect to the VM console with SPICE but it is OK with VNC.
>
> Also, I have an error when I want to migrate the VM.
>
> I use ovirt 3.1: Fedora 17 x86_64 for the Virtualization Manager and 2
> hosts with ovirt 2.5.5.
>
> Please, help!
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--

David Jaša, RHCE

SPICE QE based in Brno
GPG Key:     22C33E24
Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24
12 years, 4 months
[Users] Storage domain mount options
by Alexandru Vladulescu
Hello Guys,
I would like to ask for your help with the following problem I am facing
right now.
As we know, on the Storage tab we can configure storage domains of many
types. I am using NFS, and when configuring such an export, the
Advanced Parameters section only lets us adjust the NFS version, Retrans
and Timeout.
After a successful setup, the share is mounted on all hypervisors. The
mount output for the datastore share looks like this:
nas01.net:/datastore01/nas01.ISO on
/rhev/data-center/mnt/nas01.net:_datastore01_nas01.ISO type nfs
(rw,soft,nosharecache,timeo=10,retrans=6,vers=4,addr=10.20.30.10,clientaddr=10.20.30.102)
All good so far, but how can I tune the NFS mount command, for
example adding the noatime value to the mount option list?
Many thanks,
Alex.
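A hedged sketch of one place to look (option name from memory for the vdsm of that
era - please check the [irs] section of your own /etc/vdsm/vdsm.conf before relying
on it): vdsm builds the NFS mount command itself and lets the options be set per
host in its config file, e.g.

# /etc/vdsm/vdsm.conf on each hypervisor (illustrative values)
[irs]
# mount options vdsm will use for NFS domains; noatime added to the
# soft,nosharecache defaults visible in the mount output above
nfs_mount_options = soft,nosharecache,noatime

$ service vdsmd restart    # vdsm re-reads its config on restart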
12 years, 4 months
[Users] oVirt warning about memory usage
by Alex Leonhardt
Hi,
just seen a memory usage warning in oVirt's Admin Interface saying the
available memory is below the threshold of 1024 MB - however, when I
checked the host, it still had 20 GB left?
See screenshot.
Alex
12 years, 4 months
[Users] 3.2 release management notes
by Jonathan Horne
my boss is super concerned about something he found on the page:
http://www.ovirt.org/OVirt_3.2_release-management
MUST: No blockers on the lower level components - libvirt, lvm, device-mapper, qemu-kvm, Jboss, postgres, iscsi-initiator
can someone tell me what this is supposed to be telling me?
thanks,
jonathan
12 years, 4 months
[Users] mac address re-use
by Jonathan Horne
my 3.2 install seems to be reusing mac addresses of formerly-deleted guests.
is this the normal behavior, and/or can i turn this off? i would prefer
to have a unique mac address for each new VM created.
thanks,
jonathan
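Released MAC addresses go back into the engine's MAC pool, which is why new VMs
pick them up again. A hedged sketch of how to widen the pool so quick reuse is
less likely (key name from memory - check engine-config --list on your install;
the range below is only an example):
$ engine-config -g MacPoolRanges
$ engine-config -s MacPoolRanges=00:1a:4a:16:01:51-00:1a:4a:16:08:ff
$ service ovirt-engine restart    # engine-config changes take effect after a restart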
12 years, 4 months
[Users] Fedora 17 Guests for Desktop Virtualization
by Kevin Daly
I am trying to set up Fedora 17 guests and I've come across a problem with
Spice. It works fine with a Gnome 3 session, but if I try Cinnamon or
KDE4, the bottom panel is unresponsive. You cannot click on anything in
the panel, cannot configure it, etc.
With KDE4, I have configured a panel at the top and it works fine, but if I
try to move it to the bottom it's dead.
Any insights would be helpful.
I am running vdagent and vdagentd on the Fedora guest.
--
============================
Kevin Daly
12 years, 4 months
[Users] net device initialization order
by Jonathan Horne
i just did my first 3.2 install, i have a single manager and a single node.
i have also just created my first VM, and it has 2 network interfaces.
nic1  0aa9 – ovirtmgmt network
nic2  0aa7 – iscsi network

in the configuration dialog, nic2/0aa7 was the first one that was actually
added, and that was the one i wanted in the ovirtmgmt network (and
subsequently i will have to reverse my network cards, and i will have to see
nic2 as my mgmt/eth0 and nic1 as my iscsi in the web page), but it looks like
the guests are initializing their pci devices in backwards order.

is this something i can set somewhere, or is this a bug?

JONATHAN HORNE | Systems Administrator | Skopos Web <http://www.skopos.us>
e: jhorne(a)skopos.us   t: 214.520.4600x5042   f: 214.520.5079
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Shu Ming
Jiri Belka:
> On Sat, 05 Jan 2013 11:30:12 +0800
> Shu Ming <shuming(a)linux.vnet.ibm.com> wrote:
>
>> Jiri Belka :
>>> On Thu, 03 Jan 2013 18:08:48 +0200
>>> Itamar Heim <iheim(a)redhat.com> wrote:
>>>
>>>> Hi Everyone,
>>>>
>>>> as we wrap oVirt 3.2, I wanted to check with oVirt users on what
>>>> they find good/useful in oVirt, and what they would like to see
>>>> improved/added in coming versions?
>>> 1. Virtual Serial Port
>>> * accessible via network
>>> * accessible encrypted via network (qemu doesn't do it yet,
>>> IIRC)
>>> * vSPC-like (virtual serial port concentrator) app which would
>>> act as "proxy" to access individual VM's virtual serial ports
>>>
>>> - vmware docs: http://tinyurl.com/7dg3ll5
>>> - vSPC 3rd party info:
>>> http://isnotajoke.com/vmware_virtual_serial_ports.html
>> Here is the ongoing work:
>>
>> http://gerrit.ovirt.org/#/c/10381/
> Hi,
>
> are you able to migrate such VM? If I read it correctly it is vdsm
> which is doing network part to access virtual serial console which
> connects to local PTY device.
If the VM is migrated to another host, the console has to be reconnected
to the new host's port.
>
> So the idea would be something like this? 'conserver' -> ovirt-sdk/cli
> -> remote serial console via HTTP streaming handler ???
>
> jbelka
It is like: conserver in vdsm ---> http port multiplexing --->
remote serial console via HTTP streaming handler
ovirt-sdk/cli is not necessary here, it was only used to create a sample VM in this patch test.
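Purely as an illustration of the "serial port concentrator" idea (this is not what
the gerrit patch does), the pty that qemu reports with "char device redirected to
/dev/pts/N" can already be exposed over plain, unencrypted TCP with socat; the pty
path and port below are made up:
$ socat TCP-LISTEN:5555,reuseaddr,fork /dev/pts/3,raw,echo=0    # on the host running the VM
$ socat -,raw,echo=0 TCP:that-host:5555                         # from a remote machine
The patch above aims to do the multiplexing and HTTP streaming inside vdsm instead,
so nothing like this has to be run by hand.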
>
--
---
舒明 Shu Ming
Open Virtualization Engineerning; CSTL, IBM Corp.
Tel: 86-10-82451626 Tieline: 9051626 E-mail: shuming(a)cn.ibm.com or shuming(a)linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
12 years, 4 months
[Users] Default compatibility version in ovirt nightly
by Gianluca Cecchi
Hello,
I remember some time ago someone writing that the compatibility version
defaults for 3.2 should be set to 3.2.
I'm using oVirt nightly in an all-in-one config as of
ovirt-engine-3.2.0-1.20130101.git2184580.fc18.noarch,
but the compatibility version defaults are still 3.1, both for
local_datacenter/local_cluster and for the Default Datacenter and Cluster.
Gianluca
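A hedged aside (key name from memory, and it only shows the supported set,
not which default the all-in-one setup picks):
$ engine-config -g SupportedClusterLevels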
12 years, 4 months
[Users] Action Needed: Upcoming Deadlines for oVirt Workshop at NetApp
by Leslie Hawthorn
Hello everyone,
***If you will not be attending the oVirt workshop taking place at
NetApp HQ on 22-24 January 2013 [0], you can stop reading now.***
Hotel Room Block Expiring:
If you require a hotel room as part of your visit for the workshop,
please book your room ASAP [1], as we will be releasing extra rooms in
our block at close of business tomorrow, 8 January. If you are unable
to book a room by close of business tomorrow, all unbooked rooms in our
block will be released. You may still request our room block rate but
you will no longer be guaranteed lodging at the Country Inn and Suites
as of Wednesday, 9 January.
Registration Deadline:
In order to have an accurate headcount for catering, please ensure you
have completed your registration for the event no later than close of
business on Tuesday, 15 January. [1] Please take a moment to register
and to remind any friends and colleagues who would be interested in
attending of our registration deadline.
If you or a colleague are unable to register by Tuesday, 15 January, but
would still like to attend please contact Dave Neary off-list. [2] Dave
will do his best to ensure that we are able to process late
registrations, though we unfortunately cannot make any guarantees.
Thank you once again to Patrick Rogers, Denise Ridolfo, Jon Benedict,
Talia Reyes-Ortiz and the rest of the fine folks at NetApp for hosting
this workshop and all their hard work to bring the community together in
Sunnyvale.
If you have any questions, please let me know. I look forward to
(re)meeting you at the oVirt workshop at NetApp.
[0] - http://www.ovirt.org/NetApp_Workshop_January_2013
[1] - http://ovirtnetapp2013.eventbrite.com/#
[2] - dneary at redhat dot com
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Charlie
On Thu, Jan 3, 2013 at 11:08 AM, Itamar Heim <iheim(a)redhat.com> wrote:
> Hi Everyone,
>
> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they find
> good/useful in oVirt, and what they would like to see improved/added in
> coming versions?
>
> Thanks,
> Itamar
Good/useful: Open Source virtualization with a strong web management
interface. Rapidly improving, too.
wish improved: SPICE connection reliability and LDAPS support.
wish added: native ATA-over-Ethernet SAN support.
wish removed: Kerberos dependencies. Let people who want Kerb have
it, but don't force it where it's not needed. LDAP over SSL is
secure.
Many thanks to all the oVirt team for all their hard work!
--Charlie
12 years, 4 months
[Users] adding iso images
by Carl T. Miller
Is there a way to add an iso image to an nfs share by simply
copying a file? Or is there a command to run from one of the
hosts? The only method I know is using engine-iso-uploader
and it's not working in my environment.
c
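One commonly used shortcut, sketched with made-up paths (the ISO domain layout is
the <mount point>/<sdUUID>/ structure mentioned elsewhere in this digest; the
all-ones image UUID and the 36:36 vdsm:kvm ownership are from memory, so verify
them against an ISO uploaded the normal way first):
$ cp Fedora-17-x86_64-DVD.iso \
    /exports/iso/<sdUUID>/images/11111111-1111-1111-1111-111111111111/
$ chown 36:36 \
    /exports/iso/<sdUUID>/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-DVD.iso
After that the image should show up in the ISO domain without engine-iso-uploader.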
12 years, 4 months
[Users] unable to install ovirt nightly all in one due to java dependency
by Gianluca Cecchi
Helo,
on a freshly installed F18 I get this
$ sudo yum install ovirt-engine-setup-plugin-allinone
...
--> Processing Dependency: java-1.7.0-openjdk >= 1:1.7.0.9-2.3.3.2 for
package: ovirt-engine-3.2.0-1.20130104.git2ad721c.fc18.noarch
---> Package plexus-cli.noarch 0:1.2-12.fc18 will be installed
--> Finished Dependency Resolution
Error: Package: ovirt-engine-3.2.0-1.20130104.git2ad721c.fc18.noarch
(ovirt-nightly)
Requires: java-1.7.0-openjdk >= 1:1.7.0.9-2.3.3.2
Installed: 1:java-1.7.0-openjdk-1.7.0.9-2.3.3.fc18.1.x86_64
(@fedora)
java-1.7.0-openjdk = 1:1.7.0.9-2.3.3.fc18.1
My java 1.7.0.9-2.3.3.fc18.1 is required by libreoffice...
I don't see any java package in the ovirt repo...
Is such a low-level version dependency really intended?
1.7.0.9-2.3.3.2
vs
1.7.0.9-2.3.3.fc18.1
what a universally runnable language... ;-)
If I try to remove java
Dependencies Resolved
================================================================================
 Package                        Arch    Version                 Repository  Size
================================================================================
Removing:
 java-1.7.0-openjdk             x86_64  1:1.7.0.9-2.3.3.fc18.1  @fedora     89 M
Removing for dependencies:
 icedtea-web                    x86_64  1.3.1-1.fc18            @fedora    873 k
 libreoffice-calc               x86_64  1:3.6.3.2-8.fc18        @fedora     23 M
 libreoffice-core               x86_64  1:3.6.3.2-8.fc18        @fedora    219 M
 libreoffice-draw               x86_64  1:3.6.3.2-8.fc18        @fedora    2.2 M
 libreoffice-graphicfilter      x86_64  1:3.6.3.2-8.fc18        @fedora    1.1 M
 libreoffice-impress            x86_64  1:3.6.3.2-8.fc18        @fedora    3.1 M
 libreoffice-langpack-en        x86_64  1:3.6.3.2-8.fc18        @fedora    0.0
 libreoffice-langpack-it        x86_64  1:3.6.3.2-8.fc18        @fedora     27 M
 libreoffice-math               x86_64  1:3.6.3.2-8.fc18        @fedora    3.2 M
 libreoffice-pdfimport          x86_64  1:3.6.3.2-8.fc18        @fedora    1.5 M
 libreoffice-presenter-screen   x86_64  1:3.6.3.2-8.fc18        @fedora    2.2 M
 libreoffice-ure                x86_64  1:3.6.3.2-8.fc18        @fedora    8.6 M
 libreoffice-writer             x86_64  1:3.6.3.2-8.fc18        @fedora     15 M
 libreoffice-xsltfilter         x86_64  1:3.6.3.2-8.fc18        @fedora    1.9 M

Transaction Summary
================================================================================
Remove  1 Package (+14 Dependent packages)
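A hedged way to double-check exactly which java version the nightly rpm requires
without installing it (yumdownloader is in yum-utils; the repo id ovirt-nightly is
taken from the error output above):
$ yumdownloader --enablerepo=ovirt-nightly ovirt-engine
$ rpm -qp --requires ovirt-engine-3.2.0-*.noarch.rpm | grep java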
12 years, 4 months
Re: [Users] "ataX: resetting link" problem with ovirt 3.1 on Fedora Core 17
by Yuval M
uname -a output:
Linux segfault.home 3.6.5-1.fc17.x86_64 #1 SMP Wed Oct 31 19:37:18 UTC 2012
x86_64 x86_64 x86_64 GNU/Linux
and the other host:
Linux kernelpanic.home 3.6.8-2.fc17.x86_64 #1 SMP Tue Nov 27 19:35:02 UTC
2012 x86_64 x86_64 x86_64 GNU/Linux
if there is any other diagnostic information required I will happily supply
it, just tell me what...
Yuval
On Fri, Jan 4, 2013 at 7:15 PM, Marcelo Barbosa <
mr.marcelo.barbosa(a)gmail.com> wrote:
> Hi Yuval,
>
> What is your kernel version? The oVirt stable version (3.1) runs better with:
>
> [admin@firehome ~]$ uname -a
> Linux firehome.no-ip.org *3.3.4-5*.fc17.x86_64 #1 SMP Mon May 7 17:29:34
> UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
>
> Marcelo Barbosa
> *mr.marcelo.barbosa(a)gmail.com*
>
>
> On Fri, Jan 4, 2013 at 2:59 PM, Yuval M <yuvalme(a)gmail.com> wrote:
>
>> Hello,
>> We're M.Sc students at Tel-Aviv University trying to set up a basic ovirt
>> system with 2 hosts.
>> Both run Fedora Core 17.
>>
>> We've run into a problem that makes the hosts seem down from the web
>> management UI and stops the VMs that run on them.
>> this occurs on both hosts at the very same second, which leads me to
>> believe it's not a hardware problem:
>> ("segfault" is the name of the server. don't ask.)
>>
>> Has anyone seen something like this?
>> any suggestions?
>> Thanks,
>>
>> Yuval Meir
>> Limor Gavish
>>
>>
>>
>> Dec 29 14:00:10 segfault kernel: [403596.851539] NFS: Cache request
>> denied due to non-unique superblock keys
>> Dec 29 14:00:10 segfault kernel: [403596.930361] ata1: hard resetting link
>> Dec 29 14:00:10 segfault kernel: [403597.237778] ata1: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:10 segfault kernel: [403597.239721] ata1: EH complete
>> Dec 29 14:00:10 segfault kernel: [403597.241759] ata2: hard resetting link
>> Dec 29 14:00:10 segfault kernel: [403597.548726] ata2: SATA link up 6.0
>> Gbps (SStatus 133 SControl 300)
>> Dec 29 14:00:10 segfault kernel: [403597.559061] ata2.00: configured for
>> UDMA/133
>> Dec 29 14:00:10 segfault kernel: [403597.559066] ata2: EH complete
>> Dec 29 14:00:10 segfault kernel: [403597.559279] ata3: hard resetting link
>> Dec 29 14:00:11 segfault kernel: [403597.866689] ata3: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:11 segfault kernel: [403597.868680] ata3: EH complete
>> Dec 29 14:00:11 segfault kernel: [403597.868928] ata4: hard resetting link
>> Dec 29 14:00:11 segfault kernel: [403598.176588] ata4: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:11 segfault kernel: [403598.178470] ata4: EH complete
>> Dec 29 14:00:11 segfault kernel: [403598.178707] ata5: hard resetting link
>> Dec 29 14:00:11 segfault kernel: [403598.485535] ata5: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:11 segfault kernel: [403598.487472] ata5: EH complete
>> Dec 29 14:00:11 segfault kernel: [403598.487656] ata6: hard resetting link
>> Dec 29 14:00:12 segfault kernel: [403598.795437] ata6: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:12 segfault kernel: [403598.797325] ata6: EH complete
>> Dec 29 14:00:12 segfault kernel: [403598.797583] ata7: hard resetting link
>> Dec 29 14:00:12 segfault kernel: [403599.105616] ata7: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:12 segfault kernel: [403599.107307] ata7: EH complete
>> Dec 29 14:00:12 segfault kernel: [403599.109318] ata8: hard resetting link
>> Dec 29 14:00:12 segfault kernel: [403599.416217] ata8: SATA link up 1.5
>> Gbps (SStatus 113 SControl 300)
>> Dec 29 14:00:12 segfault kernel: [403599.416650] ata8.00: configured for
>> UDMA/66
>> Dec 29 14:00:12 segfault kernel: [403599.416769] ata8: EH complete
>> Dec 29 14:00:12 segfault kernel: [403599.416922] ata9: hard resetting link
>> Dec 29 14:00:13 segfault kernel: [403599.722240] ata9: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:13 segfault kernel: [403599.722256] ata9: EH complete
>> Dec 29 14:00:13 segfault kernel: [403599.722545] ata10: hard resetting
>> link
>> Dec 29 14:00:13 segfault kernel: [403600.027130] ata10: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:13 segfault kernel: [403600.027234] ata10: EH complete
>> Dec 29 14:00:13 segfault kernel: [403600.027483] ata11.00: hard resetting
>> link
>> Dec 29 14:00:13 segfault kernel: [403600.331952] ata11.01: hard resetting
>> link
>> Dec 29 14:00:14 segfault kernel: [403600.787975] ata11.00: SATA link up
>> 3.0 Gbps (SStatus 123 SControl 300)
>> Dec 29 14:00:14 segfault kernel: [403600.787989] ata11.01: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:14 segfault kernel: [403600.829480] ata11.00: configured for
>> UDMA/133
>> Dec 29 14:00:14 segfault kernel: [403600.829487] ata11: EH complete
>> Dec 29 14:00:14 segfault kernel: [403600.829638] ata12.00: hard resetting
>> link
>> Dec 29 14:00:14 segfault kernel: [403601.133734] ata12.01: hard resetting
>> link
>> Dec 29 14:00:14 segfault kernel: [403601.589741] ata12.00: SATA link up
>> 3.0 Gbps (SStatus 123 SControl 300)
>> Dec 29 14:00:14 segfault kernel: [403601.589757] ata12.01: SATA link up
>> 3.0 Gbps (SStatus 123 SControl 300)
>> Dec 29 14:00:14 segfault kernel: [403601.600049] ata12.00: configured for
>> UDMA/133
>> Dec 29 14:00:14 segfault kernel: [403601.604444] ata12.01: configured for
>> UDMA/133
>> Dec 29 14:00:14 segfault kernel: [403601.604450] ata12: EH complete
>> Dec 29 14:00:14 segfault kernel: [403601.604745] ata13: hard resetting
>> link
>> Dec 29 14:00:15 segfault kernel: [403601.921125] ata13: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:15 segfault kernel: [403601.921144] ata13: EH complete
>> Dec 29 14:00:15 segfault kernel: [403601.921300] ata14: hard resetting
>> link
>> Dec 29 14:00:15 segfault kernel: [403602.377523] ata14: SATA link up 1.5
>> Gbps (SStatus 113 SControl 300)
>> Dec 29 14:00:15 segfault kernel: [403602.386842] ata14.00: configured for
>> UDMA/100
>> Dec 29 14:00:15 segfault kernel: [403602.392491] ata14: EH complete
>> Dec 29 14:00:15 segfault kernel: [403602.392695] ata15: soft resetting
>> link
>> Dec 29 14:00:15 segfault kernel: [403602.543523] ata15: EH complete
>> Dec 29 14:00:15 segfault kernel: [403602.543727] ata16: soft resetting
>> link
>> Dec 29 14:00:16 segfault kernel: [403602.706229] ata16: EH complete
>> Dec 29 14:00:16 segfault vdsm Storage.LVM WARNING lvm vgs failed: 5 [] ['
>> Volume group "78b1f41d-29cf-4e1a-a84d-fb9175f4388e" not found']
>> Dec 29 14:00:16 segfault kernel: [403603.273230] ata1: hard resetting link
>> Dec 29 14:00:16 segfault kernel: [403603.580216] ata1: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:16 segfault kernel: [403603.582803] ata1: EH complete
>> Dec 29 14:00:16 segfault kernel: [403603.585201] ata2: hard resetting link
>> Dec 29 14:00:17 segfault kernel: [403603.892210] ata2: SATA link up 6.0
>> Gbps (SStatus 133 SControl 300)
>> Dec 29 14:00:17 segfault kernel: [403603.901635] ata2.00: configured for
>> UDMA/133
>> Dec 29 14:00:17 segfault kernel: [403603.901641] ata2: EH complete
>> Dec 29 14:00:17 segfault kernel: [403603.901873] ata3: hard resetting link
>> Dec 29 14:00:17 segfault kernel: [403604.209120] ata3: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:17 segfault kernel: [403604.210966] ata3: EH complete
>> Dec 29 14:00:17 segfault kernel: [403604.211156] ata4: hard resetting link
>> Dec 29 14:00:17 segfault kernel: [403604.517966] ata4: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:17 segfault kernel: [403604.519902] ata4: EH complete
>> Dec 29 14:00:17 segfault kernel: [403604.520157] ata5: hard resetting link
>> Dec 29 14:00:18 segfault kernel: [403604.826862] ata5: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:18 segfault kernel: [403604.828830] ata5: EH complete
>> Dec 29 14:00:18 segfault kernel: [403604.829067] ata6: hard resetting link
>> Dec 29 14:00:18 segfault kernel: [403605.135731] ata6: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:18 segfault kernel: [403605.137698] ata6: EH complete
>> Dec 29 14:00:18 segfault kernel: [403605.137887] ata7: hard resetting link
>> Dec 29 14:00:18 segfault kernel: [403605.445714] ata7: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:18 segfault kernel: [403605.447619] ata7: EH complete
>> Dec 29 14:00:18 segfault kernel: [403605.449682] ata8: hard resetting link
>> Dec 29 14:00:19 segfault kernel: [403605.756684] ata8: SATA link up 1.5
>> Gbps (SStatus 113 SControl 300)
>> Dec 29 14:00:19 segfault kernel: [403605.757166] ata8.00: configured for
>> UDMA/66
>> Dec 29 14:00:19 segfault kernel: [403605.757266] ata8: EH complete
>> Dec 29 14:00:19 segfault kernel: [403605.757485] ata9: hard resetting link
>> Dec 29 14:00:19 segfault kernel: [403606.064643] ata9: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:19 segfault kernel: [403606.064660] ata9: EH complete
>> Dec 29 14:00:19 segfault kernel: [403606.064839] ata10: hard resetting
>> link
>> Dec 29 14:00:19 segfault kernel: [403606.369548] ata10: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:19 segfault kernel: [403606.369562] ata10: EH complete
>> Dec 29 14:00:19 segfault kernel: [403606.369842] ata11.00: hard resetting
>> link
>> Dec 29 14:00:20 segfault kernel: [403606.675420] ata11.01: hard resetting
>> link
>> Dec 29 14:00:20 segfault kernel: [403607.131342] ata11.00: SATA link up
>> 3.0 Gbps (SStatus 123 SControl 300)
>> Dec 29 14:00:20 segfault kernel: [403607.131355] ata11.01: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:20 segfault kernel: [403607.248752] ata11.00: configured for
>> UDMA/133
>> Dec 29 14:00:20 segfault kernel: [403607.248759] ata11: EH complete
>> Dec 29 14:00:20 segfault kernel: [403607.249045] ata12.00: hard resetting
>> link
>> Dec 29 14:00:20 segfault kernel: [403607.554194] ata12.01: hard resetting
>> link
>> Dec 29 14:00:21 segfault kernel: [403608.011215] ata12.00: SATA link up
>> 3.0 Gbps (SStatus 123 SControl 300)
>> Dec 29 14:00:21 segfault kernel: [403608.011244] ata12.01: SATA link up
>> 3.0 Gbps (SStatus 123 SControl 300)
>> Dec 29 14:00:21 segfault kernel: [403608.021439] ata12.00: configured for
>> UDMA/133
>> Dec 29 14:00:21 segfault kernel: [403608.028239] ata12.01: configured for
>> UDMA/133
>> Dec 29 14:00:21 segfault kernel: [403608.028243] ata12: EH complete
>> Dec 29 14:00:21 segfault kernel: [403608.028442] ata13: hard resetting
>> link
>> Dec 29 14:00:21 segfault kernel: [403608.345178] ata13: SATA link down
>> (SStatus 0 SControl 300)
>> Dec 29 14:00:21 segfault kernel: [403608.345195] ata13: EH complete
>> Dec 29 14:00:21 segfault kernel: [403608.345443] ata14: hard resetting
>> link
>> Dec 29 14:00:22 segfault kernel: [403608.800805] ata14: SATA link up 1.5
>> Gbps (SStatus 113 SControl 300)
>> Dec 29 14:00:22 segfault kernel: [403608.810109] ata14.00: configured for
>> UDMA/100
>> Dec 29 14:00:22 segfault kernel: [403608.815632] ata14: EH complete
>> Dec 29 14:00:22 segfault kernel: [403608.815863] ata15: soft resetting
>> link
>> Dec 29 14:00:22 segfault kernel: [403608.966859] ata15: EH complete
>> Dec 29 14:00:22 segfault kernel: [403608.967054] ata16: soft resetting
>> link
>> Dec 29 14:00:22 segfault kernel: [403609.130010] ata16: EH complete
>> Dec 29 14:00:22 segfault vdsm Storage.LVM WARNING lvm vgs failed: 5 [] ['
>> Volume group "f0175c0c-75f1-4518-ba77-f7476171f6c6" not found']
>> Dec 29 14:00:22 segfault vdsm Storage.StorageDomain WARNING Resource
>> namespace f0175c0c-75f1-4518-ba77-f7476171f6c6_imageNS already registered
>> Dec 29 14:00:22 segfault vdsm Storage.StorageDomain WARNING Resource
>> namespace f0175c0c-75f1-4518-ba77-f7476171f6c6_volumeNS already registered
>> Dec 29 14:00:23 segfault vdsm Storage.LVM WARNING lvm vgs failed: 5 [] ['
>> Volume group "78b1f41d-29cf-4e1a-a84d-fb9175f4388e" not found']
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
12 years, 4 months
Re: [Users] Wiki: Network summary page
by Dan Kenigsberg
On Thu, Jan 03, 2013 at 05:12:12PM +0100, Adrian Gibanel wrote:
> As I mentioned in another message, I've been given wiki editing permissions. I was supposed to write up a way of having host-only networks thanks to libvirt and a former mailing list message but... Here's what happened. I wanted to know if there was something in the wiki about it and... So...
>
> Here's my wiki page:
> http://www.ovirt.org/User:Adrian15
>
> And here there's a kind of Network summary of what you can find on the oVirt wiki itself about network and mainly my thoughts about them:
> http://www.ovirt.org/User:Adrian15/Network_Index
>
> As you can see there are some ideas:
>
> * The mockups are not clearly defined as such in their own pages and there should be a common header for that
> * Some mockups are too hidden from main page.
> * There's no central way of knowing the supported oVirt network topologies (now with my index it's a bit easier)
> * Network stuff is messed up with some bits in one place and some bits in another place.
>
> Not sure when I'll be able to continue this Network index and the host-only write-up, because I've already spent a lot of time on this little summary, but if you think I should search the wiki for some term other than "Network" that unveils other network pages, don't hesitate to tell me.
>
> So... Enjoy and improve! (As this is a wiki).
Thanks, Adrian. Revamping the networking topic on the wiki is a great
initiative - it should not be tucked away in a personal sub page. It
would be great if you could tag network-related pages with a
"Networking" category, and move your summary to that category's page.
P.S. I am a bit confused about what you mean by host-only network - is
this related to http://www.ovirt.org/Features/Nicless_Network ?
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Andrew Cathrow
----- Original Message -----
> From: "David Jaša" <djasa(a)redhat.com>
> To: "Bret Palsson" <bret(a)getjive.com>
> Cc: users(a)ovirt.org
> Sent: Friday, January 4, 2013 10:20:22 AM
> Subject: Re: [Users] What do you want to see in oVirt next?
>
> > Bret Palsson wrote on Fri 04. 01. 2013 at 08:05 -0700:
> > I'd like to see a built in VNC console that is java or a Spice java
> > console, something that runs cross platform, OS X, Windows, Linux
> > etc...
>
> There are purely browser-based solutions for both: noVNC (mature
> AFAIK)
> and spice-html5 (still in beta quality, developers welcome). The
> problem
Beta quality is probably a little optimistic for spice-html5
> with both is that they require websockets on server side which is not
> yet ready in qemu - spice-server implementation had to be reverted
> and
You can run a novnc server on another host to proxy it - similar to what Horizon does.
> will need a rewrite and libvncserver gained initial support just few
> months ago.
>
> I'm sure that web-based consoles will be available in oVirt shortly
> after all layers below get the necessary features.
>
> David
>
> >
> > On Fri, Jan 4, 2013 at 7:04 AM, David Jaša <djasa(a)redhat.com>
> > wrote:
> > Marcelo Barbosa wrote on Fri 04. 01. 2013 at 11:46 -0200:
> > > Dear Itamar,
> > >
> > >
> > > My dreams for oVirt 3.2:
> > >
> > >
> > > 1) Packages CentOs 6.x (ovirt-engine,
> > > ovirt-guest-agent);
> > > 2) Resize VM disk from GUI ovirt-engine;
> > > 3) Spice for Google Chrome, this browser run in
> > > anywere
> > O.S. and largest world use;
> >
> >
> > spice-xpi 2.8 will support chrome/chromium but it will
> > still
> > be linux-only.
> >
> > David
> >
> > > 4) Show in Virtual Machines > General > ip/ip's from
> > > vm
> > OR Virtual Machines > Network Interfaces > show ip;
> > > 5) ISO upload from GUI ovirt-engine the best option;
> > > 6) Monitoring enviroment oVirt to Zabbix and
> > > Nagios(total
> > vms, mem clusters, and more...) example:
> > https://github.com/dougsland/nagios-plugins-rhev
> > >
> > >
> > > I think this would oVirt the project to another level,
> > much higher ...
> > >
> > >
> > > Thanks.
> > >
> > > Marcelo Barbosa
> > > mr.marcelo.barbosa(a)gmail.com
> > >
> > >
> > > On Fri, Jan 4, 2013 at 9:18 AM, Soeren Grunewald
> > <soeren.grunewald(a)avionic-design.de> wrote:
> > > On 01/03/2013 07:04 PM, Itamar Heim wrote:
> > > On 01/03/2013 07:25 PM, Soeren Grunewald
> > wrote:
> > > On 01/03/2013 05:33 PM, Itamar
> > > Heim
> > wrote:
> > > On 01/03/2013 06:28 PM,
> > Soeren Grunewald wrote:
> > > On 01/03/2013
> > > 05:08
> > PM, Itamar Heim wrote:
> > > Hi
> > > Everyone,
> > >
> > > as we
> > > wrap
> > oVirt 3.2, I wanted to check with oVirt users on what they
> > > find
> > good/useful in oVirt, and what they would like to see
> > >
> > improved/added in coming versions?
> > >
> > > A nice feature
> > > would
> > be to be able to migrate guests between AMD and
> > > Intel host
> > > machines.
> > > KVM should be
> > > able
> > to support it [1]. I don't know if other hypervisors
> > > are able/can to
> > support this. Because oVirt aims to support different
> > > hypervisors it
> > > might
> > be problematic. But I think offline migration
> > > should be
> > > possible.
> > >
> > > can you please elaborate
> > what do you mean by offline migration?
> > >
> > >
> > > With offline migration, I means
> > > the
> > guest is not running while moving
> > > it from one host to another.
> > >
> > > I have 2 host machines for
> > > testing.
> > One is running on a AMD Opteron and
> > > the other Intel Xeon. Since I can
> > not put both machines in the same
> > > cluster, I have specified a
> > > cluster
> > called AMD (for the opteron) and
> > > another called Intel (for the
> > > xeon).
> > > Now I would like to move a guest
> > from the AMD machine to the Intel
> > > machine. Since the CPU does not
> > match I can't.
> > >
> > > I'm not aware that you can't.
> > >
> > >
> > > Ok, now is see my problem.
> > >
> > >
> > > you should be able to move the VM from
> > > one
> > cluster to the other.
> > > yes, it will see a different cpu model.
> > > why
> > is that an issue?
> > >
> > >
> > >
> > > To move the guest I need to "edit" the guest and
> > than I can change the cluster.
> > >
> > > Thanks,
> > > Soeren
> > >
> > >
> > >
> > >
> > > I assume the solution for this
> > > could
> > be a generic cluster definition.
> > > Something with a limited/specific
> > cpu set equal to the qemu
> > > configuration "-cpu qemu32" or
> > > "-cpu
> > qemu64".
> > >
> > >
> > > Regards,
> > > Soeren
> > >
> > > [...]
> > >
> > >
> > >
> > >
> > > _______________________________________________
> > > Users mailing list
> > > Users(a)ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > >
> > >
> > >
> > > _______________________________________________
> > > Users mailing list
> > > Users(a)ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> > --
> >
> > David Jaša, RHCE
> >
> > SPICE QE based in Brno
> > GPG Key: 22C33E24
> > Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3
> > 3E24
> >
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
>
> David Jaša, RHCE
>
> SPICE QE based in Brno
> GPG Key: 22C33E24
> Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24
>
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
12 years, 4 months
Re: [Users] Destktop VM to Server VM ?
by Michael Pasternak
On 01/03/2013 09:59 PM, Itamar Heim wrote:
> On 01/03/2013 05:32 PM, Hideo Goto wrote:
>> First of all, a Happy new year to every subscriber of the list.
>>
>> My first concern of the year about ovirt:
>>
>> Is it possible to change the type of an existing VM from desktop to
>> server by any possible means?
>> In my case, the VM was imported from KVM to Ovirt 3.0 environment
>> using virt-v2v. I have just found out that the VM was recognized as
>> desktop while expected as server .
>>
>> Thanks in advance for any advise.
>
> I'm pretty sure the REST API/SDK/CLI would allow this as updating this field of the VM.
> michael?
it used to work AFAIR, but now I see this:
[oVirt shell (connected)]# show vm iscsi_desktop | grep type
type : desktop
[oVirt shell (connected)]#
[oVirt shell (connected)]# update vm iscsi_desktop --type server
error:
status: 400
reason: Bad Request
detail: Failed updating the properties of the VM. This may be caused either by:
1. The values selected are not appropriate for the VM; or
2. Its values cannot be updated while the VM is in UP state (Please shut down the VM in order to modify properties such as CPU or cluster).
Omer?
--
Michael Pasternak
RedHat, ENG-Virtualization R&D
12 years, 4 months
[Users] Storage Domain
by sirin
Hi all,
Why does the Storage Domain go offline while an ISO image is being downloaded to the storage?
Downloading an ISO image and installing an OS in parallel fails (
maybe this is a bug? It does not seem correct…
Artem
12 years, 4 months
Re: [Users] Destktop VM to Server VM ?
by Hideo Goto
Thanks a lot Itamar.
Everything became clear.
Unfortunately, we are still using ovirt 3.0, for which I think ovirt-shell
is not available.
So, I will make a small Java program.
Best RGDS
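In case it helps anyone else on 3.0, the same one-field update can also be done
with plain curl against the REST API (a sketch only - engine URL, port, credentials
and VM UUID below are placeholders, and per the earlier reply the VM has to be down):
$ curl -k -u 'admin@internal:password' \
    -H 'Content-Type: application/xml' \
    -X PUT -d '<vm><type>server</type></vm>' \
    'https://engine.example.com:8443/api/vms/VM_UUID'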
2013/1/5 Itamar Heim <iheim(a)redhat.com>
> On 01/04/2013 07:04 PM, Hideo Goto wrote:
>
>> Thaks a lot for your suggestion.
>>
>> in fact I found the <type>parameter which specifies server/desktop,
>> checking the output of VMs' parameters by REST (/api/vms).
>> Is it this very "type" paramter to change?
>>
>
> yes.
>
>
>
>> Best RGDS
>> Hideo GOTO
>>
>> P.S.
>> I would be happy if you could recomend me any tool, if existing,
>> which might make it easy to update parameters by REST.
>>
>
> ovirt-shell is a cli tool which should let you update such a parameter
> with a one liner command.
> you can also use the python or java sdk's for any heavier scripting.
>
>
>
>> 2013/1/4 Itamar Heim <iheim(a)redhat.com>:
>>
>>> On 01/03/2013 05:32 PM, Hideo Goto wrote:
>>>
>>>>
>>>> First of all, a Happy new year to every subscriber of the list.
>>>>
>>>> My first concern of the year about ovirt:
>>>>
>>>> Is it possible to change the type of an existing VM from desktop to
>>>> server by any possible means?
>>>> In my case, the VM was imported from KVM to Ovirt 3.0 environment
>>>> using virt-v2v. I have just found out that the VM was recognized as
>>>> desktop while expected as server .
>>>>
>>>> Thanks in advance for any advise.
>>>>
>>>
>>>
>>> I'm pretty sure the REST API/SDK/CLI would allow this as updating this
>>> field
>>> of the VM.
>>> michael?
>>>
>>
>
>
12 years, 4 months
Re: [Users] Failed to import Vm from export to storagedomain
by Haim Ateya
Hi Ricky,
it's really interesting: the VM process failed to start because libvirt identified double use of the same PCI address:
Thread-3111::ERROR::2013-01-03 16:30:27,373::vm::617::vm.Vm::(_startUnderlyingVm) vmId=`9741c58b-e7b2-41d8-9f35-8ea79ca81528`::The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 579, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/libvirtvm.py", line 1421, in _run
self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2489, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: XML error: Attempted double use of PCI Address '0:0:1.2' (may need "multifunction='on'" for device on function 0
Thread-3111::DEBUG::2013-01-03 16:30:27,377::vm::933::vm.Vm::(setDownStatus) vmId=`9741c58b-e7b2-41d8-9f35-8ea79ca81528`::Changed state to Down: XML error: Attempted double use of PCI Address '0:0:1.2' (may need "multifunction='on'" for device on function 0
From the VM xml, I see you are trying to use 6 USB controllers with the same PCI address:
<controller type="usb">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/>
</controller>
<controller type="usb">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/>
</controller>
<controller type="usb">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/>
</controller>
<controller type="usb">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/>
</controller>
<controller type="usb">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/>
</controller>
<controller type="usb">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/>
</controller>
We need to understand what went wrong there - was it the export attempt that created this problematic entry in the OVF file, or was it the import?
anyway, please open a bug for it.
Haim
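For context, libvirt rejects the domain because all six USB controllers claim
function 0x2 of slot 0x01 on bus 0x00. A hand-edited OVF/domain XML that it would
accept keeps a single controller on that address (illustrative fragment only, not a
statement about what export/import should have generated):
<controller type="usb">
  <address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci"/>
</controller>
<!-- the five duplicate <controller type="usb"> entries removed -->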
----- Original Message -----
> From: "Ricky" <rockybaloo(a)gmail.com>
> To: Users(a)ovirt.org
> Sent: Thursday, January 3, 2013 7:00:21 PM
> Subject: [Users] Failed to import Vm from export to storagedomain
>
> Hi,
>
> So, finally I reinstalled the whole cluster after I had exported
> every VM.
>
> I followed this wiki
> http://wiki.dreyou.org/dokuwiki/doku.php?id=ovirt_rpm_start31.
>
> Everything went up except that I just have one host installed in the
> cluster. The other host is still serving the old cluster....
>
> When trying to import my old VMs I got problems with some of them not
> being imported. But 3 of my VMs did and one of them was my
> mailserver... but when I try to start the VM I hit the wall again.
>
> I have attached the vdsm.log; right now I can't see the forest for all
> the trees........
>
> Regards //Ricky
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
12 years, 4 months
Re: [Users] ISO path is not empty
by Haim Ateya
I don't see any particular reason for blocking this in general, since we create our domain structure under /ISO/<sdUUID>/..
so it doesn't really matter.
please open a bug for it.
Haim
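Until that is fixed, a hedged workaround (paths are examples only): mount the new
filesystem as usual, then point engine-setup at an empty subdirectory rather than
at the mount point, so lost+found stays out of the way:
$ mount /dev/vg0/iso /ISO          # example device
$ mkdir /ISO/iso-domain            # empty, no lost+found inside
... and give /ISO/iso-domain to engine-setup as the ISO path.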
----- Original Message -----
> From: "Mohsen Saeedi" <mohsen.saeedi(a)gmail.com>
> To: users(a)ovirt.org
> Sent: Friday, January 4, 2013 10:49:25 AM
> Subject: [Users] ISO path is not empty
>
>
> Hi
> I have a problem with oVirt engine-setup. When I run it, during the
> setup process it checks the ISO local path and prints an error:
> directory /ISO is not empty
> I made a new partition with an ext4 filesystem and then mounted it
> under /ISO. We know it has a lost+found directory. I think this
> should be fixed in a newer version.
> Thanks.
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
12 years, 4 months
Re: [Users] What do you want to see in oVirt next?
by Shu Ming
Jiri Belka :
> On Thu, 03 Jan 2013 18:08:48 +0200
> Itamar Heim <iheim(a)redhat.com> wrote:
>
>> Hi Everyone,
>>
>> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
>> find good/useful in oVirt, and what they would like to see
>> improved/added in coming versions?
> 1. Virtual Serial Port
> * accessible via network
> * accessible encrypted via network (qemu doesn't do it yet, IIRC)
> * vSPC-like (virtual serial port concentrator) app which would act
> as "proxy" to access individual VM's virtual serial ports
>
> - vmware docs: http://tinyurl.com/7dg3ll5
> - vSPC 3rd party info:
> http://isnotajoke.com/vmware_virtual_serial_ports.html
Here is the ongoing work:
http://gerrit.ovirt.org/#/c/10381/
>
> 2. Clustered engine
> * 2 engine nodes making an app cluster both same priority
>
> 3. OS independent as possible
> * engine talks to vdsmd, thus engine should not depend on any
> Linux specific OS features; engine thus could with some work
> be possible to install on *BSD, Solaris etc...
> * add other OS types into engine (with nice icons), ESXi/VirtualBox
> both offer other OS types besides Windows/Linux.
>
> jbelka
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
--
---
舒明 Shu Ming
Open Virtualization Engineerning; CSTL, IBM Corp.
Tel: 86-10-82451626 Tieline: 9051626 E-mail: shuming(a)cn.ibm.com or shuming(a)linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
12 years, 4 months
[Users] Subject: Re: What do you want to see in oVirt next?
by Thomas Scofield
- Ovirt management server should be able to run as a virtual machine
managed by ovirt
- Clustered management server, active/passive, or even better active/active
- Support for additional internal users, so monitoring and automation don't
need to rely on an external source for authentication
- guest agent should have the ability to configure portions of the virtual
machine
-- set the hostname of the virtual machine
-- configure the network information for the virtual machine
-- reset root password
> Hi Everyone,
>
> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
> find good/useful in oVirt, and what they would like to see
> improved/added in coming versions?
>
> Thanks,
> Itamar
12 years, 4 months
[Users] Missing archives
by Karsten 'quaid' Wade
On 21 Dec I fixed the archive for this mailing list at the request of a
user.
When I did that, I failed to set the permissions and ownership for the
changed mailbox file. It remained owned by root, so Mailman was unable
to write to the archives. :( :(
I think we can fix this if anyone has a copy of all the email sent to
this list since 21 Dec. in an .mbox format.
Anyone have everything from 21 Dec. to this message I just sent (which
should hit the archives)?
- Karsten
--
Karsten 'quaid' Wade, Sr. Analyst - Community Growth
http://TheOpenSourceWay.org .^\ http://community.redhat.com
@quaid (identi.ca/twitter/IRC) \v' gpg: AD0E0C41
12 years, 4 months