[Users] problem with all-in-one ovirt install on Centos 6.3 related to hostname resolution
by Frederic Bevia
Hello,
First, I apologise if this is the wrong place for my question.
I tried to install oVirt all-in-one on a CentOS 6 test server, following these
howtos: http://wiki.centos.org/HowTos/oVirt,
http://blog.jebpages.com/archives/up-and-running-with-ovirt-3-1-edition/ and
http://www.ovirt.org/Quick_Start_Guide.
I followed the prerequisites: CPU, RAM, disks, packages, etc.
My server is a test server with a fresh CentOS 6.3 install, so it sits in a
LAN separated from the main production LAN.
It has access to the Internet and to DNS, but it isn't registered in our DNS
servers; it is simply declared in its own hosts file (/etc/hosts). Its FQDN is
srv-santos.cg33.fr (note the lame joke :-)), and when I ping it from itself
(ping srv-santos.cg33.fr), the ping is OK (same with ping localhost). When I
do this in Python:
gethostbyname('srv-santos.cg33.fr')
'172.18.93.215'
the response is OK.
So the OS itself can resolve its own hostname via the hosts file, since it isn't
in DNS. I also checked that in /etc/host.conf the lookup order is hosts, bind.
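For anyone debugging the same symptom, the gethostbyname check above can be made a little more thorough with `socket.getaddrinfo`, which exercises the normal NSS lookup path (hosts file first, then DNS). This is just a generic illustration, not part of the oVirt setup code:

```python
import socket

def resolve(name):
    """Resolve a hostname the way most applications do (NSS: /etc/hosts, then DNS)."""
    try:
        # getaddrinfo goes through the full resolver path, unlike a raw DNS query,
        # so it will pick up /etc/hosts entries as well
        return sorted({info[4][0] for info in socket.getaddrinfo(name, None, socket.AF_INET)})
    except socket.gaierror as err:
        return err

print(resolve("localhost"))  # should include at least 127.0.0.1
```

If this returns the expected address but the setup tool still complains, the tool is likely bypassing NSS and querying DNS directly.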
But when I run engine-setup I get this:
srv-santos.cg33.fr did not resolve into an IP address
User input failed validation, do you still wish to use it? (yes|no):
If I say yes, the install proceeds, but then it fails:
"Error: There's a problem with JBoss service.Check that it's up and rerun setup.
Please check log file /var/log/ovirt-engine/engine-setup_2013_03_06_09_52_24.log
for more information"
The last lines of the log:
  File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 216, in waitForJbossUp
    utils.retry(isHealthPageUp, tries=45, timeout=350, sleep=5)
  File "/usr/share/ovirt-engine/scripts/common_utils.py", line 929, in retry
    return func()
  File "/usr/share/ovirt-engine/scripts/plugins/all_in_one_100.py", line 429, in isHealthPageUp
    raise Exception(ERROR_JBOSS_STATUS)
Exception: Error: There's a problem with JBoss service.Check that it's up and
rerun setup.
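From the traceback, setup polls a health page through a generic retry helper. A minimal sketch of what a helper like `utils.retry` presumably does (the real implementation is in common_utils.py; the names and parameters here are inferred only from the call site, so treat this as an illustration):

```python
import time

def retry(func, tries=45, timeout=350, sleep=5):
    """Call func() until it succeeds, up to `tries` attempts or `timeout` seconds."""
    deadline = time.time() + timeout
    last_exc = None
    for _ in range(tries):
        try:
            return func()
        except Exception as exc:  # remember the last failure so it can be re-raised
            last_exc = exc
        if time.time() >= deadline:
            break
        time.sleep(sleep)
    raise last_exc
```

With tries=45 and sleep=5 that is several minutes of polling before the "problem with JBoss service" exception finally surfaces, which matches the long pause people see during a failing all-in-one setup.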
I even started jboss-as during one of my many tries:
[root@srv-santos fred]# service jboss-as status
jboss-as is running (pid 22043)
So I think that the setup process can't resolve the hostname when it tests
whether JBoss is up and the servlet is OK.
That would mean the name resolution is done via DNS only, without
checking the hosts file.
Can someone help me solve this (without setting up a local BIND server
on my host or registering my test server in our DNS servers, of course :-))?
Thanx
[Users] Adding a bond.vlantag to ovirt-node
by Alex Leonhardt
All,
I've manually added a bond.<vlantag> to a hypervisor, then added a bridge
and slaved bond.<vlantag> to it.
I've brought the interfaces up (no IPs), but ovirt-engine still won't allow
me to add the new bridged interface. I did this yesterday, so I thought
maybe it was just a cache issue; however, it doesn't seem to refresh the
hypervisor's network config periodically?
What can I do to get this sorted? I don't want to have to restart the
networking, as VMs are running and needed.
FWIW, the setup looks like this -
eth0
| ----- bond0.111 --- br1
| ----- bond0.112 --- ovirtmgmt
eth1
the change was :
eth0
| ----- bond0.111 --- br1
| ----- bond0.112 --- ovirtmgmt
| ----- bond0.113 --- br2
eth1
I then added "br2" to the ovirt-engine config; however, I'm not able to
assign it to the bond in the network config (web admin interface) for the
hypervisor / host.
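For reference, on an el6-style host the manual change above would typically be a pair of ifcfg files along these lines (device names taken from the diagram; exact options vary by distribution, so this is only a sketch):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0.113
DEVICE=bond0.113
VLAN=yes
BRIDGE=br2
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-br2
DEVICE=br2
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
```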
Also see screenshot attached.
Thanks
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
[Users] oVirt Weekly Meeting Minutes -- 2013-03-06
by Mike Burns
Minutes:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-03-06-15.00.html
Minutes (text):
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-03-06-15.00.txt
Log:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-03-06-15.00.log.html
*** Additional information included below that came in after the
meeting. See lines that start with ***
============================
#ovirt: oVirt Weekly Meeting
============================
Meeting started by mburns at 15:00:24 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-03-06-15.00.log.html
.
Meeting summary
---------------
* agenda and roll call (mburns, 15:00:42)
* Agenda (mburns, 15:00:46)
* * Release Planning and Updates (mburns, 15:00:57)
* * Conferences and Workshops (mburns, 15:01:06)
* * Sub-project Reports (mburns, 15:01:24)
* Release Planning (mburns, 15:03:44)
* release management wiki page is up for 3.3 (mburns, 15:04:15)
* LINK: http://wiki.ovirt.org/OVirt_3.3_release-management (mburns,
15:04:17)
* mburns has sent a request for feature pages and project contacts for
3.3 (mburns, 15:06:06)
* no responses yet (but that's because I just sent it this morning)
(mburns, 15:06:31)
* mburns also sent out email about overall project management process
going forward to arch@ and board@ (mburns, 15:07:05)
* dneary and mgoldboi have also been working on the overall release
process, material will be sent to arch and board in response to
mburns email (mburns, 15:08:48)
* Please get feature pages created in the wiki ASAP and link them to
the release management page (mburns, 15:09:38)
* oVirt 3.2 Updates (mburns, 15:11:36)
* ovirt-node pushed live last thursday including vdsm and fix for a
security issue (mburns, 15:11:55)
* patches for el6 builds are included in engine master and proposed on
engine-3.2 branch (mburns, 15:12:29)
* currently in testing by jhernand and oschreib_ (mburns, 15:12:58)
* beta of el6 packages and ga of f18 packages for engine 3.2.1
available next week (mburns, 15:14:45)
* YamaKasY to work with oschreib_ on getting a list of issues in 3.2
that should be fixed in an async release (mburns, 15:24:01)
* mburns to try to get on top of el6 vdsm builds this week (mburns,
15:24:19)
* workshops and conferences (mburns, 15:28:15)
* LINK: http://www.ovirt.org/Intel_Workshop_May_2013 (theron,
15:29:40)
* moving forward with the oVirt workshop in Shanghai (mburns,
15:29:56)
* working through logistics directly with Intel, and we've published the
CFP to the oVirt website, as well as the mailing lists (mburns,
15:30:06)
* LINK: http://www.ovirt.org/Intel_Workshop_May_2013 (mburns,
15:30:14)
* we're actively looking for speakers now, if anyone can think of
different avenues, feel free to let theron know (mburns, 15:31:12)
* Attendees here thinking about attending should get visas in order
ASAP (mburns, 15:32:49)
* early in planning, but we are prepping for 3 tracks. (mburns,
15:32:58)
* IDEA: focus on dev, user, and integration tracks.. thinking space
for showcasing oVirt integration with Gluster (mburns, 15:33:12)
* itamar and the intel GM have been asked to do keynotes, more info to
follow on this (mburns, 15:33:54)
* The oVirt marketing group (once finalized who's in it) will be
reaching out to OVA to help with promotions (mburns, 15:35:49)
* lodging and additional logistics are being worked, expect updates on
the oVirt website as they materialize (mburns, 15:35:58)
* Sub-Project Report -- infra (mburns, 15:43:44)
* jenkins has migrated to a new host at AlterWay (mburns, 15:45:51)
* things appear stable with the new jenkins master (mburns, 15:46:21)
*** working on the rackspace boxes. Going to be oVirt on F18 to get
nested virtualization. Will run RHEL in VMs on top.
*** other info in the minutes
*** LINK:
http://resources.ovirt.org/meetings/ovirt/2013/ovirt.2013-03-04-15.03.html
* Other Topics (mburns, 15:47:58)
Meeting ended at 15:52:30 UTC.
Action Items
------------
Action Items, by person
-----------------------
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* mburns (86)
* theron (21)
* oschreib_ (17)
* YamaKasY (15)
* dneary (10)
* ovirtbot (5)
* tfeldman (3)
* mgoldboi (3)
* doron_ (1)
* lh (1)
* dustins (1)
* quaid (0)
* ewoud (0)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[Users] ovirt / 2 iscsi storage domains / same LUN IDs
by Alex Leonhardt
Hi there,
I was doing some testing around ovirt and iscsi and found an issue: when
you use "dd" to create backing-stores for iscsi and point ovirt at them to
discover & log in, it thinks the LUN ID is the same even though the
target is different, and adds additional paths to the config
(automagically?), bringing down the iSCSI storage domain.
See the attached screenshot of what I got when trying to add a "new iscsi san
storage domain" to ovirt. The storage domain is now down and I cannot get
rid of the config (???). How do I force it to log out of the targets?
Also, does anyone know how to deal with the duplicate LUN ID issue?
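In case it helps others hitting the same thing: with a tgtd-based target, file-backed stores created with dd tend to get identical default device identifiers, and each backing store can be given a unique serial explicitly in targets.conf. A sketch (the IQNs, paths, and ID values below are made up for illustration):

```
# /etc/tgt/targets.conf (fragment)
<target iqn.2013-03.com.example:lun1>
    <backing-store /srv/iscsi/lun1.img>
        scsi_id example-id-0001
        scsi_sn 0001
    </backing-store>
</target>
```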
Thanks
Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
Re: [Users] ovirt reporting wrong cpu family
by Adrian Gibanel
I happen to have the same problem in oVirt 3.1 on Fedora.
Any workaround?
$ sudo vdsClient -s 0 getVdsCaps | grep -i flags ; echo -e -n "\n" ;cat /proc/cpuinfo | grep "model name" | head -n 1
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,eagerfpu,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,sse4_1,sse4_2,popcnt,tsc_deadline_timer,xsave,avx,lahf_lm,arat,epb,xsaveopt,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe
model name : Intel(R) Core(TM) i3-2130 CPU @ 3.40GHz
$ rpm -qa | grep -i ovirt ; rpm -qa | grep -i vdsm
ovirt-release-fedora-5-2.noarch
vdsm-python-4.10.0-10.fc17.x86_64
vdsm-cli-4.10.0-10.fc17.noarch
vdsm-4.10.0-10.fc17.x86_64
vdsm-xmlrpc-4.10.0-10.fc17.noarch
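The giveaway in outputs like the ones above is the trailing model_* entries, which are what vdsm reports to the engine as the supported CPU models. A few lines of Python to pull them out of a flags string (sample shortened from the output above):

```python
def supported_models(cpu_flags):
    """Extract the model_* entries vdsm appends to its cpuFlags string."""
    return [f for f in cpu_flags.split(",") if f.startswith("model_")]

flags = "fpu,vme,sse4_2,avx,model_coreduo,model_Conroe"
print(supported_models(flags))  # ['model_coreduo', 'model_Conroe']
```

If model_SandyBridge is missing from that list while /proc/cpuinfo clearly shows a Sandy Bridge part, the mismatch is in what vdsm/libvirt reports, not in the hardware.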
----- Original Message -----
> From: "Jithin Raju" <rajujith(a)gmail.com>
> To: "Itamar Heim" <iheim(a)redhat.com>
> CC: users(a)ovirt.org
> Sent: Thursday, 3 January 2013 4:49:34
> Subject: Re: [Users] ovirt reporting wrong cpu family
> Hi ,
> Please find the requested flags below:
> [root@fig /]# vdsClient -s 0 getVdsCaps | grep -i flags
> cpuFlags =
> fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe
> Thanks,
> Jithin
> On Thu, Jan 3, 2013 at 2:03 AM, Itamar Heim <iheim(a)redhat.com> wrote:
> > On 01/02/2013 03:37 PM, Jithin Raju wrote:
> > > Hi,
> > > I have installed ovirt 3.1 on fedora 17.
> > > I added my node with intel E5-2620 (sandy bridge) to the cluster.
> > > Even though the model is detected properly, the CPU name is shown as
> > > Intel Conroe family instead of Sandy Bridge.
> > > Since my cluster is configured as Sandy Bridge I got this error:
> > > Host fig moved to Non-Operational state as host does not meet the
> > > cluster's minimum CPU level. Missing CPU features : model_SandyBridge.
> > > As per cpuinfo: model name : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz.
> > > I moved the cluster to Conroe and got the host up.
> > > reference:
> > > http://en.wikipedia.org/wiki/Sandy_Bridge_(microarchitecture)
> > > http://ark.intel.com/products/64594
> > > PFA for screenshot.
> > > Thanks,
> > > Jithin
--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
[Users] When does oVirt auto-migrate, and what does HA do?
by Rob Zwissler
In what scenarios does oVirt auto-migrate VMs? I'm aware that it
currently migrates VMs when putting a host into maintenance, or when
manually selecting migration via the web interface, but when else will
VMs be migrated? Is there any automatic compensation for resource
imbalances between hosts? I could find no documentation on this
subject; if I missed it, I apologize!
In a related question, exactly what does enabling HA (Highly
Available) mode do? The only documentation I could find on this is at
http://www.ovirt.org/OVirt_3.0_Feature_Guide#High_availability but it
is a bit vague, and being from 3.0, possibly out of date. Can someone
briefly describe the HA migration algorithm?
Thanks,
Rob
[Users] oVirt 3.1 - engine configuration problem
by Piotr Szubiakowski
Hi,
I'm building oVirt engine 3.1 from source on Scientific Linux 6.3. I'm
working on the branch origin/engine_3.1. During startup the application
throws the following errors:
2013-02-21 11:53:33,768 WARN
[org.ovirt.engine.core.utils.ConfigUtilsBase] (MSC service thread 1-22)
Could not find enum value for option: CbcCheckOnVdsChange
2013-02-21 11:53:33,995 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (MSC service
thread 1-22) Failed to decryptData must start with zero
2013-02-21 11:53:33,996 ERROR
[org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (MSC service
thread 1-22) Failed to decrypt value for property CertificatePassword
will be used encrypted value
2013-02-21 11:53:34,000 WARN
[org.ovirt.engine.core.utils.ConfigUtilsBase] (MSC service thread 1-22)
Could not find enum value for option: ENGINEEARLib
2013-02-21 11:53:34,001 WARN
[org.ovirt.engine.core.utils.ConfigUtilsBase] (MSC service thread 1-22)
Could not find enum value for option: CAEngineKey
2013-02-21 11:53:34,008 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (MSC service
thread 1-22) Failed to decryptData must start with zero
2013-02-21 11:53:34,009 ERROR
[org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (MSC service
thread 1-22) Failed to decrypt value for property AdminPassword will be
used encrypted value
2013-02-21 11:53:34,015 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (MSC service
thread 1-22) Failed to decryptData must start with zero
2013-02-21 11:53:34,015 ERROR
[org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (MSC service
thread 1-22) Failed to decrypt value for property LocalAdminPassword
will be used encrypted value
2013-02-21 11:53:34,020 WARN
[org.ovirt.engine.core.utils.ConfigUtilsBase] (MSC service thread 1-22)
Could not find enum value for option: MinimalETLVersion
2013-02-21 11:53:34,024 WARN
[org.ovirt.engine.core.utils.ConfigUtilsBase] (MSC service thread 1-22)
Could not find enum value for option: ScriptsPath
2013-02-21 11:53:34,025 WARN
[org.ovirt.engine.core.utils.ConfigUtilsBase] (MSC service thread 1-22)
Could not find enum value for option: SQLServerI18NPrefix
2013-02-21 11:53:34,071 ERROR
[org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (MSC service
thread 1-22) Failed to decryptData must start with zero
2013-02-21 11:53:34,072 ERROR
[org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (MSC service
thread 1-22) Failed to decrypt value for property TruststorePass will be
used encrypted value
It seems to be a configuration problem. I followed this manual:
http://www.ovirt.org/Engine_Node_Integration (section "Engine core
machine"). Is this manual up to date?
Best regards,
Piotr
[Users] Migration issue Asking For Help
by xianghuadu
This is a multi-part message in MIME format.
------=_001_NextPart146280477306_=----
Content-Type: text/plain;
charset="gb2312"
Content-Transfer-Encoding: base64
aGkgIGFsbA0KICAgICAgICAgICAgIEkgcmVjZW50bHkgaW4gdGhlIHJlc2VhcmNoIG92aXJ0IGVu
Y291bnRlciBhIHByb2JsZW0uIA0KSW4gdGhlIHZtIG1pZ3JhdGlvbiBvY2N1cnMgd2hlbiB0aGUg
ZXJyb3I6IE1pZ3JhdGlvbiBmYWlsZWQgZHVlIHRvIEVycm9yOiBDb3VsZCBub3QgY29ubmVjdCB0
byBwZWVyIGhvc3QuDQpNeSBlbnZpcm9ubWVudCBpczoNCktWTSAgICAgICAgICAgICBkZWxsIDI5
NTAgKiAyDQpzdG9yYWdlICAgICAgICBpc2NzaS10YXJnZXQNCnZtIHN5c3RlbSAgICB3aW5kb3dz
IDIwMDggcjINCm92aXJ0LWxvZ6O6DQoyMDEzLTAzLTA1IDE0OjUyOjIzLDA3NCBJTkZPICBbb3Jn
Lm92aXJ0LmVuZ2luZS5jb3JlLnZkc2Jyb2tlci5WZHNVcGRhdGVSdW5UaW1lSW5mb10gKFF1YXJ0
elNjaGVkdWxlcl9Xb3JrZXItNDIpIFszMjNkN2NhOF0gVk0gY2VudG9zIDRjYzIzZDkyLTg2Njct
NDcxMC05NzE0LWE2N2MwZDE3OGZhMCBtb3ZlZCBmcm9tIE1pZ3JhdGluZ0Zyb20gLS0+IFVwDQoy
MDEzLTAzLTA1IDE0OjUyOjIzLDA3NiBJTkZPICBbb3JnLm92aXJ0LmVuZ2luZS5jb3JlLnZkc2Jy
b2tlci5WZHNVcGRhdGVSdW5UaW1lSW5mb10gKFF1YXJ0elNjaGVkdWxlcl9Xb3JrZXItNDIpIFsz
MjNkN2NhOF0gYWRkaW5nIFZNIDRjYzIzZDkyLTg2NjctNDcxMC05NzE0LWE2N2MwZDE3OGZhMCB0
byByZS1ydW4gbGlzdA0KMjAxMy0wMy0wNSAxNDo1MjoyMywwNzkgRVJST1IgW29yZy5vdmlydC5l
bmdpbmUuY29yZS52ZHNicm9rZXIuVmRzVXBkYXRlUnVuVGltZUluZm9dIChRdWFydHpTY2hlZHVs
ZXJfV29ya2VyLTQyKSBbMzIzZDdjYThdIFJlcnVuIHZtIDRjYzIzZDkyLTg2NjctNDcxMC05NzE0
LWE2N2MwZDE3OGZhMC4gQ2FsbGVkIGZyb20gdmRzIDIwNQ0KMjAxMy0wMy0wNSAxNDo1MjoyMyww
ODUgSU5GTyAgW29yZy5vdmlydC5lbmdpbmUuY29yZS52ZHNicm9rZXIudmRzYnJva2VyLk1pZ3Jh
dGVTdGF0dXNWRFNDb21tYW5kXSAocG9vbC0zLXRocmVhZC00OSkgWzMyM2Q3Y2E4XSBTVEFSVCwg
TWlncmF0ZVN0YXR1c1ZEU0NvbW1hbmQoSG9zdE5hbWUgPSAyMDUsIEhvc3RJZCA9IDRlN2QxYWUy
LTgyNGUtMTFlMi1iYjRjLTAwMTg4YmU0ZGUyOSwgdm1JZD00Y2MyM2Q5Mi04NjY3LTQ3MTAtOTcx
NC1hNjdjMGQxNzhmYTApLCBsb2cgaWQ6IDYxODA4NWQNCjIwMTMtMDMtMDUgMTQ6NTI6MjMsMTMx
IEVSUk9SIFtvcmcub3ZpcnQuZW5naW5lLmNvcmUudmRzYnJva2VyLnZkc2Jyb2tlci5Ccm9rZXJD
b21tYW5kQmFzZV0gKHBvb2wtMy10aHJlYWQtNDkpIFszMjNkN2NhOF0gRmFpbGVkIGluIE1pZ3Jh
dGVTdGF0dXNWRFMgbWV0aG9kDQoyMDEzLTAzLTA1IDE0OjUyOjIzLDEzMiBFUlJPUiBbb3JnLm92
aXJ0LmVuZ2luZS5jb3JlLnZkc2Jyb2tlci52ZHNicm9rZXIuQnJva2VyQ29tbWFuZEJhc2VdIChw
b29sLTMtdGhyZWFkLTQ5KSBbMzIzZDdjYThdIEVycm9yIGNvZGUgbm9Db25QZWVyIGFuZCBlcnJv
ciBtZXNzYWdlIFZEU0dlbmVyaWNFeGNlcHRpb246IFZEU0Vycm9yRXhjZXB0aW9uOiBGYWlsZWQg
dG8gTWlncmF0ZVN0YXR1c1ZEUywgZXJyb3IgPSBDb3VsZCBub3QgY29ubmVjdCB0byBwZWVyIFZE
Uw0KMjAxMy0wMy0wNSAxNDo1MjoyMywxMzQgSU5GTyAgW29yZy5vdmlydC5lbmdpbmUuY29yZS52
ZHNicm9rZXIudmRzYnJva2VyLkJyb2tlckNvbW1hbmRCYXNlXSAocG9vbC0zLXRocmVhZC00OSkg
WzMyM2Q3Y2E4XSBDb21tYW5kIG9yZy5vdmlydC5lbmdpbmUuY29yZS52ZHNicm9rZXIudmRzYnJv
a2VyLk1pZ3JhdGVTdGF0dXNWRFNDb21tYW5kIHJldHVybiB2YWx1ZSANCiBDbGFzcyBOYW1lOiBv
cmcub3ZpcnQuZW5naW5lLmNvcmUudmRzYnJva2VyLnZkc2Jyb2tlci5TdGF0dXNPbmx5UmV0dXJu
Rm9yWG1sUnBjDQptU3RhdHVzICAgICAgICAgICAgICAgICAgICAgICBDbGFzcyBOYW1lOiBvcmcu
b3ZpcnQuZW5naW5lLmNvcmUudmRzYnJva2VyLnZkc2Jyb2tlci5TdGF0dXNGb3JYbWxScGMNCm1D
b2RlICAgICAgICAgICAgICAgICAgICAgICAgIDEwDQptTWVzc2FnZSAgICAgICAgICAgICAgICAg
ICAgICBDb3VsZCBub3QgY29ubmVjdCB0byBwZWVyIFZEUw0KDQoNCjIwMTMtMDMtMDUgMTQ6NTI6
MjMsMTM4IElORk8gIFtvcmcub3ZpcnQuZW5naW5lLmNvcmUudmRzYnJva2VyLnZkc2Jyb2tlci5C
cm9rZXJDb21tYW5kQmFzZV0gKHBvb2wtMy10aHJlYWQtNDkpIFszMjNkN2NhOF0gSG9zdE5hbWUg
PSAyMDUNCjIwMTMtMDMtMDUgMTQ6NTI6MjMsMTM5IEVSUk9SIFtvcmcub3ZpcnQuZW5naW5lLmNv
cmUudmRzYnJva2VyLlZEU0NvbW1hbmRCYXNlXSAocG9vbC0zLXRocmVhZC00OSkgWzMyM2Q3Y2E4
XSBDb21tYW5kIE1pZ3JhdGVTdGF0dXNWRFMgZXhlY3V0aW9uIGZhaWxlZC4gRXhjZXB0aW9uOiBW
RFNFcnJvckV4Y2VwdGlvbjogVkRTR2VuZXJpY0V4Y2VwdGlvbjogVkRTRXJyb3JFeGNlcHRpb246
IEZhaWxlZCB0byBNaWdyYXRlU3RhdHVzVkRTLCBlcnJvciA9IENvdWxkIG5vdCBjb25uZWN0IHRv
IHBlZXIgVkRTDQoyMDEzLTAzLTA1IDE0OjUyOjIzLDE0MSBJTkZPICBbb3JnLm92aXJ0LmVuZ2lu
ZS5jb3JlLnZkc2Jyb2tlci52ZHNicm9rZXIuTWlncmF0ZVN0YXR1c1ZEU0NvbW1hbmRdIChwb29s
LTMtdGhyZWFkLTQ5KSBbMzIzZDdjYThdIEZJTklTSCwgTWlncmF0ZVN0YXR1c1ZEU0NvbW1hbmQs
IGxvZyANCg0KdmRzbS1sb2ejug0KVGhyZWFkLTU5Njk6OkRFQlVHOjoyMDEzLTAzLTA1IDE0OjUy
OjIxLDMxMjo6bGlidmlydHZtOjoyODM6OnZtLlZtOjooX2dldERpc2tMYXRlbmN5KSB2bUlkPWA0
Y2MyM2Q5Mi04NjY3LTQ3MTAtOTcxNC1hNjdjMGQxNzhmYTBgOjpEaXNrIHZkYSBsYXRlbmN5IG5v
dCBhdmFpbGFibGUNClRocmVhZC01NjIyOjpFUlJPUjo6MjAxMy0wMy0wNSAxNDo1MjoyMiw4OTA6
OnZtOjoyMDA6OnZtLlZtOjooX3JlY292ZXIpIHZtSWQ9YDRjYzIzZDkyLTg2NjctNDcxMC05NzE0
LWE2N2MwZDE3OGZhMGA6OkZhaWxlZCB0byBkZXN0cm95IHJlbW90ZSBWTQ0KVHJhY2ViYWNrICht
b3N0IHJlY2VudCBjYWxsIGxhc3QpOg0KICBGaWxlICIvdXNyL3NoYXJlL3Zkc20vdm0ucHkiLCBs
aW5lIDE5OCwgaW4gX3JlY292ZXINCiAgICBzZWxmLmRlc3RTZXJ2ZXIuZGVzdHJveShzZWxmLl92
bS5pZCkNCiAgRmlsZSAiL3Vzci9saWI2NC9weXRob24yLjYveG1scnBjbGliLnB5IiwgbGluZSAx
MTk5LCBpbiBfX2NhbGxfXw0KICAgIHJldHVybiBzZWxmLl9fc2VuZChzZWxmLl9fbmFtZSwgYXJn
cykNCiAgRmlsZSAiL3Vzci9saWI2NC9weXRob24yLjYveG1scnBjbGliLnB5IiwgbGluZSAxNDg5
LCBpbiBfX3JlcXVlc3QNCiAgICB2ZXJib3NlPXNlbGYuX192ZXJib3NlDQogIEZpbGUgIi91c3Iv
bGliNjQvcHl0aG9uMi42L3htbHJwY2xpYi5weSIsIGxpbmUgMTI1MywgaW4gcmVxdWVzdA0KICAg
IHJldHVybiBzZWxmLl9wYXJzZV9yZXNwb25zZShoLmdldGZpbGUoKSwgc29jaykNCiAgRmlsZSAi
L3Vzci9saWI2NC9weXRob24yLjYveG1scnBjbGliLnB5IiwgbGluZSAxMzgyLCBpbiBfcGFyc2Vf
cmVzcG9uc2UNCiAgICByZXNwb25zZSA9IGZpbGUucmVhZCgxMDI0KQ0KICBGaWxlICIvdXNyL2xp
YjY0L3B5dGhvbjIuNi9zb2NrZXQucHkiLCBsaW5lIDM4MywgaW4gcmVhZA0KICAgIGRhdGEgPSBz
ZWxmLl9zb2NrLnJlY3YobGVmdCkNCiAgRmlsZSAiL3Vzci9saWI2NC9weXRob24yLjYvc3NsLnB5
IiwgbGluZSAyMTUsIGluIHJlY3YNCiAgICByZXR1cm4gc2VsZi5yZWFkKGJ1ZmxlbikNCiAgRmls
ZSAiL3Vzci9saWI2NC9weXRob24yLjYvc3NsLnB5IiwgbGluZSAxMzYsIGluIHJlYWQNCiAgICBy
ZXR1cm4gc2VsZi5fc3Nsb2JqLnJlYWQobGVuKQ0KU1NMRXJyb3I6IFRoZSByZWFkIG9wZXJhdGlv
biB0aW1lZCBvdXQNClRocmVhZC01NjIyOjpFUlJPUjo6MjAxMy0wMy0wNSAxNDo1MjoyMiw5MDk6
OnZtOjoyODM6OnZtLlZtOjoocnVuKSB2bUlkPWA0Y2MyM2Q5Mi04NjY3LTQ3MTAtOTcxNC1hNjdj
MGQxNzhmYTBgOjpGYWlsZWQgdG8gbWlncmF0ZQ0KVHJhY2ViYWNrIChtb3N0IHJlY2VudCBjYWxs
IGxhc3QpOg0KICBGaWxlICIvdXNyL3NoYXJlL3Zkc20vdm0ucHkiLCBsaW5lIDI2OCwgaW4gcnVu
DQogICAgc2VsZi5fc3RhcnRVbmRlcmx5aW5nTWlncmF0aW9uKCkNCiAgRmlsZSAiL3Vzci9zaGFy
ZS92ZHNtL2xpYnZpcnR2bS5weSIsIGxpbmUgNDQzLCBpbiBfc3RhcnRVbmRlcmx5aW5nTWlncmF0
aW9uDQogICAgcmVzcG9uc2UgPSBzZWxmLmRlc3RTZXJ2ZXIubWlncmF0aW9uQ3JlYXRlKHNlbGYu
X21hY2hpbmVQYXJhbXMpDQogIEZpbGUgIi91c3IvbGliNjQvcHl0aG9uMi42L3htbHJwY2xpYi5w
eSIsIGxpbmUgMTE5OSwgaW4gX19jYWxsX18NCiAgICByZXR1cm4gc2VsZi5fX3NlbmQoc2VsZi5f
X25hbWUsIGFyZ3MpDQogIEZpbGUgIi91c3IvbGliNjQvcHl0aG9uMi42L3htbHJwY2xpYi5weSIs
IGxpbmUgMTQ4OSwgaW4gX19yZXF1ZXN0DQogICAgdmVyYm9zZT1zZWxmLl9fdmVyYm9zZQ0KICBG
aWxlICIvdXNyL2xpYjY0L3B5dGhvbjIuNi94bWxycGNsaWIucHkiLCBsaW5lIDEyNTMsIGluIHJl
cXVlc3QNCiAgICByZXR1cm4gc2VsZi5fcGFyc2VfcmVzcG9uc2UoaC5nZXRmaWxlKCksIHNvY2sp
DQogIEZpbGUgIi91c3IvbGliNjQvcHl0aG9uMi42L3htbHJwY2xpYi5weSIsIGxpbmUgMTM4Miwg
aW4gX3BhcnNlX3Jlc3BvbnNlDQogICAgcmVzcG9uc2UgPSBmaWxlLnJlYWQoMTAyNCkNCiAgRmls
ZSAiL3Vzci9saWI2NC9weXRob24yLjYvc29ja2V0LnB5IiwgbGluZSAzODMsIGluIHJlYWQNCiAg
ICBkYXRhID0gc2VsZi5fc29jay5yZWN2KGxlZnQpDQogIEZpbGUgIi91c3IvbGliNjQvcHl0aG9u
Mi42L3NzbC5weSIsIGxpbmUgMjE1LCBpbiByZWN2DQogICAgcmV0dXJuIHNlbGYucmVhZChidWZs
ZW4pDQogIEZpbGUgIi91c3IvbGliNjQvcHl0aG9uMi42L3NzbC5weSIsIGxpbmUgMTM2LCBpbiBy
ZWFkDQogICAgcmV0dXJuIHNlbGYuX3NzbG9iai5yZWFkKGxlbikNClNTTEVycm9yOiBUaGUgcmVh
ZCBvcGVyYXRpb24gdGltZWQgb3V0DQpUaHJlYWQtNTk3MTo6REVCVUc6OjIwMTMtMDMtMDUgMTQ6
NTI6MjMsMzg0OjpCaW5kaW5nWE1MUlBDOjo5MDM6OnZkczo6KHdyYXBwZXIpIGNsaWVudCBbMTky
LjE2OC4xLjIwMV06OmNhbGwgdm1HZXRTdGF0cyB3aXRoICgnNGNjMjNkOTItODY2Ny00NzEwLTk3
MTQtYTY3YzBkMTc4ZmEwJywpIHt9IGZsb3dJRCBbMzIzZDdjYThdDQpUaHJlYWQtNTk3MTo6REVC
VUc6OjIwMTMtMDMtMDUgMTQ6NTI6MjMsMzg1OjpsaWJ2aXJ0dm06OjI4Mzo6dm0uVm06OihfZ2V0
RGlza0xhdGVuY3kpIHZtSWQ9YDRjYzIzZDkyLTg2NjctNDcxMC05NzE0LWE2N2MwZDE3OGZhMGA6
OkRpc2sgdmRhIGxhdGVuY3kgbm90IGF2YWlsYWJsZQ0KVGhyZWFkLTU5NzE6OkRFQlVHOjoyMDEz
LTAzLTA1IDE0OjUyOjIzLDM4NTo6QmluZGluZ1hNTFJQQzo6OTEwOjp2ZHM6Oih3cmFwcGVyKSBy
ZXR1cm4gdm1HZXRTdGF0cyB3aXRoIHsnc3RhdHVzJzogeydtZXNzYWdlJzogJ0RvbmUnLCAnY29k
ZSc6IDB9LCAnc3RhdHNMaXN0JzogW3snc3RhdHVzJzogJ1VwJywgJ3VzZXJuYW1lJzogJ1Vua25v
d24nLCAnbWVtVXNhZ2UnOiAnMCcsICdhY3BpRW5hYmxlJzogJ3RydWUnLCAncGlkJzogJzMxMzUn
LCAnZGlzcGxheUlwJzogJzE5Mi4xNjguMS4yMzUnLCAnZGlzcGxheVBvcnQnOiB1JzU5MDAnLCAn
c2Vzc2lvbic6ICdVbmtub3duJywgJ2Rpc3BsYXlTZWN1cmVQb3J0JzogJy0xJywgJ3RpbWVPZmZz
ZXQnOiAnLTInLCAnaGFzaCc6ICctNzYxNTkzNTgzMjA1ODc3MTY0JywgJ2JhbGxvb25JbmZvJzog
eydiYWxsb29uX21heCc6IDUyNDI4OCwgJ2JhbGxvb25fY3VyJzogNTI0Mjg4fSwgJ3BhdXNlQ29k
ZSc6ICdOT0VSUicsICdjbGllbnRJcCc6ICcnLCAna3ZtRW5hYmxlJzogJ3RydWUnLCAnbmV0d29y
ayc6IHt1J3ZuZXQwJzogeydtYWNBZGRyJzogJzAwOjFhOjRhOmE4OjAxOjUyJywgJ3J4RHJvcHBl
ZCc6ICcwJywgJ3J4RXJyb3JzJzogJzAnLCAndHhEcm9wcGVkJzogJzAnLCAndHhSYXRlJzogJzAu
MCcsICdyeFJhdGUnOiAnMC4wJywgJ3R4RXJyb3JzJzogJzAnLCAnc3RhdGUnOiAndW5rbm93bics
ICdzcGVlZCc6ICcxMDAwJywgJ25hbWUnOiB1J3ZuZXQwJ319LCAndm1JZCc6ICc0Y2MyM2Q5Mi04
NjY3LTQ3MTAtOTcxNC1hNjdjMGQxNzhmYTAnLCAnZGlzcGxheVR5cGUnOiAndm5jJywgJ2NwdVVz
ZXInOiAnMS44NScsICdkaXNrcyc6IHt1J3ZkYSc6IHsncmVhZFJhdGUnOiAnMC4wMCcsICd0cnVl
c2l6ZSc6ICcyMTQ3NDgzNjQ4MCcsICdhcHBhcmVudHNpemUnOiAnMjE0NzQ4MzY0ODAnLCAnd3Jp
dGVSYXRlJzogJzQwNy4xNicsICdpbWFnZUlEJzogJzZiMjUyZWI4LWFiOWYtNDQ1Zi05MjJlLTUy
ZDg2YmM2ZDc5MCd9LCB1J2hkYyc6IHsncmVhZExhdGVuY3knOiAnMCcsICdhcHBhcmVudHNpemUn
OiAnMCcsICd3cml0ZUxhdGVuY3knOiAnMCcsICdmbHVzaExhdGVuY3knOiAnMCcsICdyZWFkUmF0
ZSc6ICcwLjAwJywgJ3RydWVzaXplJzogJzAnLCAnd3JpdGVSYXRlJzogJzAuMDAnfX0sICdtb25p
dG9yUmVzcG9uc2UnOiAnMCcsICdzdGF0c0FnZSc6ICcwLjY3JywgJ2VsYXBzZWRUaW1lJzogJzk2
ODQnLCAndm1UeXBlJzogJ2t2bScsICdjcHVTeXMnOiAnNS45MycsICdhcHBzTGlzdCc6IFtdLCAn
Z3Vlc3RJUHMnOiAnJ31dfQ0KVGhyZWFkLTU5NzI6OkRFQlVHOjoyMDEzLTAzLTA1IDE0OjUyOjIz
LDQwODo6QmluZGluZ1hNTFJQQzo6OTAzOjp2ZHM6Oih3cmFwcGVyKSBjbGllbnQgWzE5Mi4xNjgu
MS4yMDFdOjpjYWxsIHZtR2V0TWlncmF0aW9uU3RhdHVzIHdpdGggKCc0Y2MyM2Q5Mi04NjY3LTQ3
MTAtOTcxNC1hNjdjMGQxNzhmYTAnLCkge30gZmxvd0lEIFszMjNkN2NhOF0NClRocmVhZC01OTcy
OjpERUJVRzo6MjAxMy0wMy0wNSAxNDo1MjoyMyw0MDg6OkJpbmRpbmdYTUxSUEM6OjkxMDo6dmRz
Ojood3JhcHBlcikgcmV0dXJuIHZtR2V0TWlncmF0aW9uU3RhdHVzIHdpdGggeydzdGF0dXMnOiB7
J21lc3NhZ2UnOiAnQ291bGQgbm90IGNvbm5lY3QgdG8gcGVlciBWRFMnLCAnY29kZSc6IDEwfSwg
J3Byb2dyZXNzJzogMTB9DQpUaHJlYWQtMjE6OkRFQlVHOjoyMDEzLTAzLTA1IDE0OjUyOjI2LDg4
ODo6bWlzYzo6ODM6OlN0b3JhZ2UuTWlzYy5leGNDbWQ6Oig8bGFtYmRhPikgJy9iaW4vZGQgaWZs
YWc9ZGlyZWN0IGlmPS9kZXYvMGU1ODI3YTUtNmYzYy00OWJlLWJlOWItMGJmYjY1MTk4NjQ0L21l
dGFkYXRhIGJzPTQwOTYgY291bnQ9MScgKGN3ZCBOb25lKQ0KVGhyZWFkLTIxOjpERUJVRzo6MjAx
My0wMy0wNSAxNDo1MjoyNiw5MDA6Om1pc2M6OjgzOjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooPGxh
bWJkYT4pIFNVQ0NFU1M6IDxlcnI+ID0gJzErMCByZWNvcmRzIGluXG4xKzAgcmVjb3JkcyBvdXRc
bjQwOTYgYnl0ZXMgKDQuMSBrQikgY29waWVkLCAwLjAwMDM2NTUyIHMsIDExLjIgTUIvc1xuJzsg
PHJjPiA9IDANClRocmVhZC01OTc2OjpERUJVRzo6MjAxMy0wMy0wNSAxNDo1MjozMSw1NTU6OnRh
c2s6OjU2ODo6VGFza01hbmFnZXIuVGFzazo6KF91cGRhdGVTdGF0ZSkgVGFzaz1gZGE1NDUyMzEt
OTUzOC00MTJkLTk2NmUtYTA1NmNhN2QwNzRhYDo6bW92aW5nIGZyb20gc3RhdGUgaW5pdCAtPiBz
dGF0ZSBwcmVwYXJpbmcNClRocmVhZC01OTc2OjpJTkZPOjoyMDEzLTAzLTA1IDE0OjUyOjMxLDU1
Njo6bG9nVXRpbHM6OjM3OjpkaXNwYXRjaGVyOjood3JhcHBlcikgUnVuIGFuZCBwcm90ZWN0OiBy
ZXBvU3RhdHMob3B0aW9ucz1Ob25lKQ0KVGhyZWFkLTU5NzY6OklORk86OjIwMTMtMDMtMDUgMTQ6
NTI6MzEsNTU2Ojpsb2dVdGlsczo6Mzk6OmRpc3BhdGNoZXI6Oih3cmFwcGVyKSBSdW4gYW5kIHBy
b3RlY3Q6IHJlcG9TdGF0cywgUmV0dXJuIHJlc3BvbnNlOiB7dScwZTU4MjdhNS02ZjNjLTQ5YmUt
YmU5Yi0wYmZiNjUxOTg2NDQnOiB7J2RlbGF5JzogJzAuMDEyOTU2ODU3NjgxMycsICdsYXN0Q2hl
Y2snOiAnNC43JywgJ2NvZGUnOiAwLCAndmFsaWQnOiBUcnVlfSwgdSc0MDA3ZjMwYS1mODg4LTQ1
ODctYjgyYy00MGJjZGU0MDFhY2InOiB7J2RlbGF5JzogJzAuMDAyMjE4OTYxNzE1NycsICdsYXN0
Q2hlY2snOiAnNC45JywgJ2NvZGUnOiAwLCAndmFsaWQnOiBUcnVlfX0NClRocmVhZC01OTc2OjpE
RUJVRzo6MjAxMy0wMy0wNSAxNDo1MjozMSw1NTY6OnRhc2s6OjExNTE6OlRhc2tNYW5hZ2VyLlRh
c2s6OihwcmVwYXJlKSBUYXNrPWBkYTU0NTIzMS05NTM4LTQxMmQtOTY2ZS1hMDU2Y2E3ZDA3NGFg
OjpmaW5pc2hlZDoge3UnMGU1ODI3YTUtNmYzYy00OWJlLWJlOWItMGJmYjY1MTk4NjQ0Jzogeydk
ZWxheSc6ICcwLjAxMjk1Njg1NzY4MTMnLCAnbGFzdENoZWNrJzogJzQuNycsICdjb2RlJzogMCwg
J3ZhbGlkJzogVHJ1ZX0sIHUnNDAwN2YzMGEtZjg4OC00NTg3LWI4MmMtNDBiY2RlNDAxYWNiJzog
eydkZWxheSc6ICcwLjAwMjIxODk2MTcxNTcnLCAnbGFzdENoZWNrJzogJzQuOScsICdjb2RlJzog
MCwgJ3ZhbGlkJzogVHJ1ZX19DQpUaHJlYWQtNTk3Njo6REVCVUc6OjIwMTMtMDMtMDUgMTQ6NTI6
MzEsNTU2Ojp0YXNrOjo1Njg6OlRhc2tNYW5hZ2VyLlRhc2s6OihfdXBkYXRlU3RhdGUpIFRhc2s9
YGRhNTQ1MjMxLTk1MzgtNDEyZC05NjZlLWEwNTZjYTdkMDc0YWA6Om1vdmluZyBmcm9tIHN0YXRl
IHByZXBhcmluZyAtPiBzdGF0ZSBmaW5pc2hlZA0KVGhyZWFkLTU5NzY6OkRFQlVHOjoyMDEzLTAz
LTA1IDE0OjUyOjMxLDU1Njo6cmVzb3VyY2VNYW5hZ2VyOjo4MDk6OlJlc291cmNlTWFuYWdlci5P
d25lcjo6KHJlbGVhc2VBbGwpIE93bmVyLnJlbGVhc2VBbGwgcmVxdWVzdHMge30gcmVzb3VyY2Vz
IHt9DQpUaHJlYWQtNTk3Njo6REVCVUc6OjIwMTMtMDMtMDUgMTQ6NTI6MzEsNTU2OjpyZXNvdXJj
ZU1hbmFnZXI6Ojg0NDo6UmVzb3VyY2VNYW5hZ2VyLk93bmVyOjooY2FuY2VsQWxsKSBPd25lci5j
YW5jZWxBbGwgcmVxdWVzdHMge30NClRocmVhZC01OTc2OjpERUJVRzo6MjAxMy0wMy0wNSAxNDo1
MjozMSw1NTc6OnRhc2s6Ojk1Nzo6VGFza01hbmFnZXIuVGFzazo6KF9kZWNyZWYpIFRhc2s9YGRh
NTQ1MjMxLTk1MzgtNDEyZC05NjZlLWEwNTZjYTdkMDc0YWA6OnJlZiAwIGFib3J0aW5nIEZhbHNl
DQpUaHJlYWQtNTk3Nzo6REVCVUc6OjIwMTMtMDMtMDUgMTQ6NTI6MzEsNTY1OjpsaWJ2aXJ0dm06
OjI4Mzo6dm0uVm06OihfZ2V0RGlza0xhdGVuY3kpIHZtSWQ9YDRjYzIzZDkyLTg2NjctNDcxMC05
NzE0LWE2N2MwZDE3OGZhMGA6OkRpc2sgdmRhIGxhdGVuY3kgbm90IGF2YWlsYWJsZQ0KVGhyZWFk
LTIxOjpERUJVRzo6MjAxMy0wMy0wNSAxNDo1MjozNiw5MDQ6Om1pc2M6OjgzOjpTdG9yYWdlLk1p
c2MuZXhjQ21kOjooPGxhbWJkYT4pICcvYmluL2RkIGlmbGFnPWRpcmVjdCBpZj0vZGV2LzBlNTgy
N2E1LTZmM2MtNDliZS1iZTliLTBiZmI2NTE5ODY0NC9tZXRhZGF0YSBicz00MDk2IGNvdW50PTEn
IChjd2QgTm9uZSkNClRocmVhZC0yMTo6REVCVUc6OjIwMTMtMDMtMDUgMTQ6NTI6MzYsOTE3Ojpt
aXNjOjo4Mzo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KDxsYW1iZGE+KSBTVUNDRVNTOiA8ZXJyPiA9
ICcxKzAgcmVjb3JkcyBpblxuMSswIHJlY29yZHMgb3V0XG40MDk2IGJ5dGVzICg0LjEga0IpIGNv
cGllZCwgMC4wMDA0MDA2MjYgcywgMTAuMiBNQi9zXG4nOyA8cmM+ID0gMA0KVk0gQ2hhbm5lbHMg
TGlzdGVuZXI6OkRFQlVHOjoyMDEzLTAzLTA1IDE0OjUyOjQxLDMzNzo6dm1DaGFubmVsczo6NjA6
OnZkczo6KF9oYW5kbGVfdGltZW91dHMpIFRpbWVvdXQgb24gZmlsZW5vIDE4Lg0KVGhyZWFkLTU5
ODI6OkRFQlVHOjoyMDEzLTAzLTA1IDE0OjUyOjQxLDc4Njo6dGFzazo6NTY4OjpUYXNrTWFuYWdl
ci5UYXNrOjooX3VwZGF0ZVN0YXRlKSBUYXNrPWA5MWQ4OTI5ZS0xMzQ5LTRkNzQtOWJjZC1lMGRm
NDA2Y2U0NTVgOjptb3ZpbmcgZnJvbSBzdGF0ZSBpbml0IC0+IHN0YXRlIHByZXBhcmluZw0KVGhy
ZWFkLTU5ODI6OklORk86OjIwMTMtMDMtMDUgMTQ6NTI6NDEsNzg2Ojpsb2dVdGlsczo6Mzc6OmRp
hi all
I recently encountered a problem while researching oVirt.
During VM migration the following error occurs: Migration failed due to Error: Could not connect to peer host.
My environment is:
KVM: Dell 2950 * 2
storage: iscsi-target
vm system: Windows 2008 R2
ovirt-log:
2013-03-05 14:52:23,074 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] VM centos 4cc23d92-8667-4710-9714-a67c0d178fa0 moved from MigratingFrom --> Up
2013-03-05 14:52:23,076 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] adding VM 4cc23d92-8667-4710-9714-a67c0d178fa0 to re-run list
2013-03-05 14:52:23,079 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-42) [323d7ca8] Rerun vm 4cc23d92-8667-4710-9714-a67c0d178fa0. Called from vds 205
2013-03-05 14:52:23,085 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-49) [323d7ca8] START, MigrateStatusVDSCommand(HostName = 205, HostId = 4e7d1ae2-824e-11e2-bb4c-00188be4de29, vmId=4cc23d92-8667-4710-9714-a67c0d178fa0), log id: 618085d
2013-03-05 14:52:23,131 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Failed in MigrateStatusVDS method
2013-03-05 14:52:23,132 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Error code noConPeer and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Could not connect to peer VDS
2013-03-05 14:52:23,134 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] Command org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value
  Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
  mStatus   Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
  mCode     10
  mMessage  Could not connect to peer VDS

2013-03-05 14:52:23,138 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-3-thread-49) [323d7ca8] HostName = 205
2013-03-05 14:52:23,139 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-49) [323d7ca8] Command MigrateStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Could not connect to peer VDS
2013-03-05 14:52:23,141 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (pool-3-thread-49) [323d7ca8] FINISH, MigrateStatusVDSCommand, log
vdsm-log:
Thread-5969::DEBUG::2013-03-05 14:52:21,312::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-5622::ERROR::2013-03-05 14:52:22,890::vm::200::vm.Vm::(_recover) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Failed to destroy remote VM
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 198, in _recover
    self.destServer.destroy(self._vm.id)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request
    return self._parse_response(h.getfile(), sock)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response
    response = file.read(1024)
  File "/usr/lib64/python2.6/socket.py", line 383, in read
    data = self._sock.recv(left)
  File "/usr/lib64/python2.6/ssl.py", line 215, in recv
    return self.read(buflen)
  File "/usr/lib64/python2.6/ssl.py", line 136, in read
    return self._sslobj.read(len)
SSLError: The read operation timed out
Thread-5622::ERROR::2013-03-05 14:52:22,909::vm::283::vm.Vm::(run) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 268, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 443, in _startUnderlyingMigration
    response = self.destServer.migrationCreate(self._machineParams)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request
    return self._parse_response(h.getfile(), sock)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response
    response = file.read(1024)
  File "/usr/lib64/python2.6/socket.py", line 383, in read
    data = self._sock.recv(left)
  File "/usr/lib64/python2.6/ssl.py", line 215, in recv
    return self.read(buflen)
  File "/usr/lib64/python2.6/ssl.py", line 136, in read
    return self._sslobj.read(len)
SSLError: The read operation timed out
Thread-5971::DEBUG::2013-03-05 14:52:23,384::BindingXMLRPC::903::vds::(wrapper) client [192.168.1.201]::call vmGetStats with ('4cc23d92-8667-4710-9714-a67c0d178fa0',) {} flowID [323d7ca8]
Thread-5971::DEBUG::2013-03-05 14:52:23,385::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-5971::DEBUG::2013-03-05 14:52:23,385::BindingXMLRPC::910::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Up', 'username': 'Unknown', 'memUsage': '0', 'acpiEnable': 'true', 'pid': '3135', 'displayIp': '192.168.1.235', 'displayPort': u'5900', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '-2', 'hash': '-761593583205877164', 'balloonInfo': {'balloon_max': 524288, 'balloon_cur': 524288}, 'pauseCode': 'NOERR', 'clientIp': '', 'kvmEnable': 'true', 'network': {u'vnet0': {'macAddr': '00:1a:4a:a8:01:52', 'rxDropped': '0', 'rxErrors': '0', 'txDropped': '0', 'txRate': '0.0', 'rxRate': '0.0', 'txErrors': '0', 'state': 'unknown', 'speed': '1000', 'name': u'vnet0'}}, 'vmId': '4cc23d92-8667-4710-9714-a67c0d178fa0', 'displayType': 'vnc', 'cpuUser': '1.85', 'disks': {u'vda': {'readRate': '0.00', 'truesize': '21474836480', 'apparentsize': '21474836480', 'writeRate': '407.16', 'imageID': '6b252eb8-ab9f-445f-922e-52d86bc6d790'}, u'hdc': {'readLatency': '0', 'apparentsize': '0', 'writeLatency': '0', 'flushLatency': '0', 'readRate': '0.00', 'truesize': '0', 'writeRate': '0.00'}}, 'monitorResponse': '0', 'statsAge': '0.67', 'elapsedTime': '9684', 'vmType': 'kvm', 'cpuSys': '5.93', 'appsList': [], 'guestIPs': ''}]}
Thread-5972::DEBUG::2013-03-05 14:52:23,408::BindingXMLRPC::903::vds::(wrapper) client [192.168.1.201]::call vmGetMigrationStatus with ('4cc23d92-8667-4710-9714-a67c0d178fa0',) {} flowID [323d7ca8]
Thread-5972::DEBUG::2013-03-05 14:52:23,408::BindingXMLRPC::910::vds::(wrapper) return vmGetMigrationStatus with {'status': {'message': 'Could not connect to peer VDS', 'code': 10}, 'progress': 10}
Thread-21::DEBUG::2013-03-05 14:52:26,888::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)
Thread-21::DEBUG::2013-03-05 14:52:26,900::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.00036552 s, 11.2 MB/s\n'; <rc> = 0
Thread-5976::DEBUG::2013-03-05 14:52:31,555::task::568::TaskManager.Task::(_updateState) Task=`da545231-9538-412d-966e-a056ca7d074a`::moving from state init -> state preparing
Thread-5976::INFO::2013-03-05 14:52:31,556::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-5976::INFO::2013-03-05 14:52:31,556::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129568576813', 'lastCheck': '4.7', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.0022189617157', 'lastCheck': '4.9', 'code': 0, 'valid': True}}
Thread-5976::DEBUG::2013-03-05 14:52:31,556::task::1151::TaskManager.Task::(prepare) Task=`da545231-9538-412d-966e-a056ca7d074a`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129568576813', 'lastCheck': '4.7', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.0022189617157', 'lastCheck': '4.9', 'code': 0, 'valid': True}}
Thread-5976::DEBUG::2013-03-05 14:52:31,556::task::568::TaskManager.Task::(_updateState) Task=`da545231-9538-412d-966e-a056ca7d074a`::moving from state preparing -> state finished
Thread-5976::DEBUG::2013-03-05 14:52:31,556::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-5976::DEBUG::2013-03-05 14:52:31,556::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-5976::DEBUG::2013-03-05 14:52:31,557::task::957::TaskManager.Task::(_decref) Task=`da545231-9538-412d-966e-a056ca7d074a`::ref 0 aborting False
Thread-5977::DEBUG::2013-03-05 14:52:31,565::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-21::DEBUG::2013-03-05 14:52:36,904::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)
Thread-21::DEBUG::2013-03-05 14:52:36,917::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000400626 s, 10.2 MB/s\n'; <rc> = 0
VM Channels Listener::DEBUG::2013-03-05 14:52:41,337::vmChannels::60::vds::(_handle_timeouts) Timeout on fileno 18.
Thread-5982::DEBUG::2013-03-05 14:52:41,786::task::568::TaskManager.Task::(_updateState) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::moving from state init -> state preparing
Thread-5982::INFO::2013-03-05 14:52:41,786::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-5982::INFO::2013-03-05 14:52:41,786::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0130889415741', 'lastCheck': '4.9', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00270700454712', 'lastCheck': '5.1', 'code': 0, 'valid': True}}
Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::1151::TaskManager.Task::(prepare) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0130889415741', 'lastCheck': '4.9', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00270700454712', 'lastCheck': '5.1', 'code': 0, 'valid': True}}
Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::568::TaskManager.Task::(_updateState) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::moving from state preparing -> state finished
Thread-5982::DEBUG::2013-03-05 14:52:41,787::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-5982::DEBUG::2013-03-05 14:52:41,787::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-5982::DEBUG::2013-03-05 14:52:41,787::task::957::TaskManager.Task::(_decref) Task=`91d8929e-1349-4d74-9bcd-e0df406ce455`::ref 0 aborting False
Thread-5983::DEBUG::2013-03-05 14:52:41,795::libvirtvm::283::vm.Vm::(_getDiskLatency) vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-21::DEBUG::2013-03-05 14:52:46,921::misc::83::Storage.Misc.excCmd::(<lambda>) '/bin/dd iflag=direct if=/dev/0e5827a5-6f3c-49be-be9b-0bfb65198644/metadata bs=4096 count=1' (cwd None)
Thread-21::DEBUG::2013-03-05 14:52:46,933::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000389226 s, 10.5 MB/s\n'; <rc> = 0
Thread-5988::DEBUG::2013-03-05 14:52:52,015::task::568::TaskManager.Task::(_updateState) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::moving from state init -> state preparing
Thread-5988::INFO::2013-03-05 14:52:52,016::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-5988::INFO::2013-03-05 14:52:52,016::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129730701447', 'lastCheck': '5.1', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00222110748291', 'lastCheck': '5.3', 'code': 0, 'valid': True}}
Thread-5988::DEBUG::2013-03-05 14:52:52,016::task::1151::TaskManager.Task::(prepare) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::finished: {u'0e5827a5-6f3c-49be-be9b-0bfb65198644': {'delay': '0.0129730701447', 'lastCheck': '5.1', 'code': 0, 'valid': True}, u'4007f30a-f888-4587-b82c-40bcde401acb': {'delay': '0.00222110748291', 'lastCheck': '5.3', 'code': 0, 'valid': True}}
Thread-5988::DEBUG::2013-03-05 14:52:52,016::task::568::TaskManager.Task::(_updateState) Task=`d8d9ccc1-f17b-4372-a27e-bbee4eda1736`::moving from state preparing -> state finished
xianghuadu
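Both tracebacks above end in an SSL read timeout while the source host's vdsm tries to reach the destination over XML-RPC, so a first sanity check is whether the peer's vdsm port is reachable at all from the source host. A minimal sketch (vdsm's default port 54321 and the peer address are assumptions to adapt to your setup):

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False

# Example (hypothetical peer address): run this on the migration source
# host against the destination. 54321 is vdsm's default XML-RPC port.
# can_reach("192.168.1.202", 54321)
```

If this returns False, check firewall rules and that vdsmd is listening on the destination; if it returns True, the problem is more likely at the SSL/certificate layer between the two hosts.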
11 years, 8 months
[Users] Ovirt node power off
by Jakub Bittner
Hello,
we found a strange issue. We have two nodes, node1 and node2. I created a
virtual server called Debian with "Highly Available" enabled and started it
on node2. Then I physically unplugged the power from node2. oVirt management
detected that node2 is down, but the Debian VM that was running on it is
still shown as active in oVirt management; I cannot power it off, and I
cannot switch node2 to maintenance either. Is there any way to force the
oVirt management console to shut down the VM?
Thank you for help.
Jakub Bittner
11 years, 8 months
[Users] oVirt 3.2: impossible to delete a snapshot?
by Gianluca Cecchi
oVirt 3.2, with the node on F18 and the engine on another F18 host, both from the oVirt stable repo.
After I create a snapshot of a powered-off Windows XP VM, I select
the snapshot and choose "delete":
Are you sure you want to delete snapshot from Wed Feb 27 07:56:23
GMT+100 2013 with description 'test'?
ok
Error:
winxp:
Cannot remove Snapshot. Removing the VM active Snapshot is not allowed.
Is this expected? Why?
Thanks,
Gianluca
11 years, 8 months