4.0 - 2nd node fails on deploy
by Jason Jeffrey
Hi,
I am trying to build a 3-node hyperconverged (HC) cluster, with a self-hosted engine using
Gluster.
I have successfully built the 1st node; however, when I attempt to run
hosted-engine --deploy on node 2, I get the following error:
[WARNING] A configuration file must be supplied to deploy Hosted Engine on an additional host.
[ ERROR ] 'version' is not stored in the HE configuration image
[ ERROR ] Unable to get the answer file from the shared storage
[ ERROR ] Failed to execute stage 'Environment customization': Unable to get the answer file from the shared storage
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20161002232505.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed
Looking at the failure in the log file..
2016-10-02 23:25:05 WARNING otopi.plugins.gr_he_common.core.remote_answerfile remote_answerfile._customization:151 A configuration file must be supplied to deploy Hosted Engine on an additional host.
2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile remote_answerfile._fetch_answer_file:61 _fetch_answer_file
2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile remote_answerfile._fetch_answer_file:69 fetching from: /rhev/data-center/mnt/glusterSD/dcastor02:engine/0a021563-91b5-4f49-9c6b-fff45e85a025/images/f055216c-02f9-4cd1-a22c-d6b56a0a8e9b/78cb2527-a2e2-489a-9fad-465a72221b37
2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile heconflib._dd_pipe_tar:69 executing: 'sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/dcastor02:engine/0a021563-91b5-4f49-9c6b-fff45e85a025/images/f055216c-02f9-4cd1-a22c-d6b56a0a8e9b/78cb2527-a2e2-489a-9fad-465a72221b37 bs=4k'
2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile heconflib._dd_pipe_tar:70 executing: 'tar -tvf -'
2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile heconflib._dd_pipe_tar:88 stdout:
2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile heconflib._dd_pipe_tar:89 stderr:
2016-10-02 23:25:05 ERROR otopi.plugins.gr_he_common.core.remote_answerfile heconflib.validateConfImage:111 'version' is not stored in the HE configuration image
2016-10-02 23:25:05 ERROR otopi.plugins.gr_he_common.core.remote_answerfile remote_answerfile._fetch_answer_file:73 Unable to get the answer file from the shared storage
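For reference, the validation the setup runs can be exercised off-line. The sketch below builds a stand-in configuration image (a tar archive containing a 'version' entry, which is what validateConfImage looks for) and pipes it through the same dd | tar -tvf - pipeline; the file names and the "2.0" content are made up for the demo. In the failing case above, the tar listing comes back empty, which is why 'version' is reported missing.

```shell
# Create a throwaway tar archive standing in for the HE configuration image.
workdir=$(mktemp -d)
echo "2.0" > "$workdir/version"          # the entry the validator checks for
tar -cf "$workdir/heconf.img" -C "$workdir" version

# Same pipeline as heconflib._dd_pipe_tar: raw dd of the image, listed by tar.
# An empty listing here is what reproduces the "'version' is not stored" failure.
dd if="$workdir/heconf.img" bs=4k 2>/dev/null | tar -tvf -
```

Running the real path from the log through the same pipeline (as the vdsm user) shows whether the image on the Gluster volume actually contains a tar archive at all.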
Looking at the detected gluster path - /rhev/data-center/mnt/glusterSD/dcastor02:engine/0a021563-91b5-4f49-9c6b-fff45e85a025/images/f055216c-02f9-4cd1-a22c-d6b56a0a8e9b/
[root@dcasrv02 ~]# ls -al /rhev/data-center/mnt/glusterSD/dcastor02:engine/0a021563-91b5-4f49-9c6b-fff45e85a025/images/f055216c-02f9-4cd1-a22c-d6b56a0a8e9b/
total 1049609
drwxr-xr-x. 2 vdsm kvm 4096 Oct 2 04:46 .
drwxr-xr-x. 6 vdsm kvm 4096 Oct 2 04:46 ..
-rw-rw----. 1 vdsm kvm 1073741824 Oct 2 04:46 78cb2527-a2e2-489a-9fad-465a72221b37
-rw-rw----. 1 vdsm kvm    1048576 Oct 2 04:46 78cb2527-a2e2-489a-9fad-465a72221b37.lease
-rw-r--r--. 1 vdsm kvm        294 Oct 2 04:46 78cb2527-a2e2-489a-9fad-465a72221b37.meta
78cb2527-a2e2-489a-9fad-465a72221b37 is a 1 GB file; is this the engine VM?
Copying the answers file from the primary node (/etc/ovirt-hosted-engine/answers.conf) to node 2 and rerunning produces the same error : (
(hosted-engine --deploy --config-append=/root/answers.conf)
I also tried on node 3, with the same result.
Happy to provide logs and other debug output.
Thanks
Jason
libvirt-v2v error
by Saman Bandara
Dear sir,
I'm getting the following error while trying to convert a VMware RHEL 6 server to KVM.
Please give any suggestion to resolve this.
[root@kvm16 ~]# virt-v2v -ic esx://10.16.32.12/?no_verify=1 -o rhev -os 10.16.32.16:/vm-images/export_domain --network rhevm "10.16.32.36-db-slcloudcontrol"
virt-v2v: Failed to connect to qemu:///system: libvirt error code: 45, message: authentication failed: Failed to step SASL negotiation: -7 (SASL(-7): invalid parameter supplied: Unexpectedly missing a prompt result)
--
*Saman K. Bandara,* *Database Administrator*
*ShipXpres Technologies (Pvt) Ltd.*
2300 Marsh Point Road, Suite 101 || Neptune Beach, FL 32266
Phone: +94 71 8135485 | +94 76 6014001
*Company Website <http://www.shipxpress.com/>* || *LinkedIn <http://www.linkedin.com/company/shipxpress>* || *Facebook <https://www.facebook.com/ShipXpressInc>* || *Twitter <https://twitter.com/ShipXpressInc>*
oVirt 3.6.4 / PXE guest boot issues
by Alan Griffiths
An explanation/work-around for this issue raised back in April.
It seems that if, in UCS, you configure a vNIC with a single native VLAN, it
will still add an 802.1q header with tag 0 (possibly to do with QoS), and
this extra header prevents iPXE from parsing the DHCP response.
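The effect of that tag-0 header can be reproduced without UCS hardware. This sketch (illustrative only, not taken from the original report) builds a minimal priority-tagged Ethernet header and shows that a parser which only reads the EtherType at offset 12 sees the 802.1q TPID (0x8100) instead of the payload type, which is essentially what trips up a non-VLAN-aware iPXE:

```shell
# 14-byte Ethernet header plus an 802.1Q tag with VLAN ID 0 (a "priority tag"):
# dst (6 x ff), src (6 x 00), TPID 0x8100, TCI 0x0000, real EtherType 0x0800 (IPv4).
frame=$(mktemp)
printf '\377\377\377\377\377\377\000\000\000\000\000\000\201\000\000\000\010\000' > "$frame"

# A naive parser reads offset 12 and sees the tag itself, not IPv4:
od -An -tx1 -j12 -N2 "$frame"    # 81 00
# A VLAN-aware parser skips the 4-byte tag and finds the payload type:
od -An -tx1 -j16 -N2 "$frame"    # 08 00
```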
The solution for me was to present all VLANs on a single trunked vNIC to
the blade and configure VLAN tagging as per normal. The result is the tags
are stripped off the packets before being passed to the VM and DHCP now
works.
The same issue applies to VM-FEX as packets coming off the VF will have the
802.1q header. The only solution I can see here is to configure a bridged
interface for initial build of the VM and then switch to VM-FEX afterwards.
I found a discussion on the iPXE mailing list about addressing the vlan 0
issue, but I could see no agreed solution.
http://lists.ipxe.org/pipermail/ipxe-devel/2016-April/004901.html
Alan
Re: [ovirt-users] DISCARD support?
by Nicolas Ecarnot
This is a multi-part message in MIME format.
--------------DFD541D40E712C15A95C6FB1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Hello,
Sending this here to share knowledge.
Here is what I learned from many BZ and mailing list posts readings. I'm
not working at Redhat, so please correct me if I'm wrong.
We are using thin-provisioned block storage LUNs (Equallogic), on which
oVirt is creating numerous Logical Volumes, and we're very happy with it.
When oVirt is removing a virtual disk, the SAN is not informed, because
the LVM layer is not run with the "issue_discards" option enabled.
/etc/lvm/lvm.conf is not the natural place to try to change this
parameter, as VDSM is not using it.
Efforts are presently being made to include issue_discards support
directly in vdsm.conf, first at the datacenter scope (4.0.x), then per
storage domain (4.1.x), and maybe via a web GUI check-box. Part of the
effort is to make sure every bit of an LV planned for removal gets wiped
out. Part is to inform the block storage side about the deletion, in
the case of thin-provisioned LUNs.
https://bugzilla.redhat.com/show_bug.cgi?id=1342919
https://bugzilla.redhat.com/show_bug.cgi?id=981626
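For reference, the LVM option mentioned above lives in the devices section of /etc/lvm/lvm.conf. Setting it there by hand does not change oVirt's behavior, since VDSM does not read it, but it shows what the flag looks like:

```
# /etc/lvm/lvm.conf (devices section) - shown for reference only
devices {
    # Issue SCSI discards to a thin-provisioned LUN when an LV is removed.
    issue_discards = 1
}
```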
--
Nicolas ECARNOT
On Mon, Oct 3, 2016 at 2:24 PM, Nicolas Ecarnot <nicolas(a)ecarnot.net
<mailto:nicolas@ecarnot.net>> wrote:
Yaniv,
As a pure random way of web surfing, I found that you posted on
twitter an information about DISCARD support.
(https://twitter.com/YanivKaul/status/773513216664174592
<https://twitter.com/YanivKaul/status/773513216664174592>)
I did not dig any further, but has it any relation with the fact
that so far, oVirt did not reclaim lost storage space amongst its
logical volumes of its storage domains?
A BZ exist about this, but one was told no work would be done about
it until 4.x.y, so now we're there, I was wondering if you knew more?
Feel free to send such questions on the mailing list (ovirt users or
devel), so other will be able to both chime in and see the response.
We've supported a custom hook for enabling discard per disk (which is
only relevant for virtio-SCSI and IDE) for some versions now (3.5 I
believe).
We are planning to add this via a UI and API in 4.1.
In addition, we are looking into discard (instead of wipe after delete,
when discard is also zero'ing content) as well as discard when removing LVs.
See:
http://www.ovirt.org/develop/release-management/features/storage/pass-dis...
http://www.ovirt.org/develop/release-management/features/storage/wipe-vol...
http://www.ovirt.org/develop/release-management/features/storage/discard-...
Y.
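The space-reclaim mechanics the thread discusses can be seen at file level without a SAN: punching a hole in a file (the filesystem analogue of a discard, not oVirt's actual mechanism) deallocates blocks while the apparent size stays the same, just as a discard lets a thin-provisioned LUN return blocks to the array:

```shell
# Allocate an 8 MiB file, then punch out the first 4 MiB.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null
before=$(du -k "$f" | cut -f1)                      # allocated KiB before
fallocate --punch-hole --offset 0 --length 4M "$f"  # discard-like deallocation
after=$(du -k "$f" | cut -f1)                       # allocation drops, size does not
echo "$before KiB -> $after KiB (apparent size: $(stat -c%s "$f") bytes)"
```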
Best,
--
Nicolas ECARNOT
Re: [ovirt-users] Migrate machines in unknown state?
by Ekin Meroğlu
Hi Yaniv,
On Sun, Aug 7, 2016 at 9:37 PM, Ekin Meroğlu <ekin.meroglu(a)linuxera.com>
> wrote:
>
>> Hi,
>>
>> Just a reminder, if you have power management configured, first turn that
>> off for the host - when you restart vdsmd with the power management
>> configured, engine finds it not responding and tries to fence (e.g. reboot)
>> the host.
>>
>
> That's not true - if it's a graceful restart, it should not happen.
>
Can you explain this a little more? Is there a mechanism to prevent
fencing in this scenario?
In two of our customers' production systems we've experienced this exact
behavior (i.e. the engine fencing the host while restarting the vdsm service
manually) a number of times, and we were specifically advised by Red
Hat Support to turn off PM before restarting the service. I'd like to know
if we have a better / easier way to restart vdsm.
btw, both of the environments were RHEV-H based RHEV 3.5 clusters, and both
were busy systems, so restarting the vdsm service took quite a long time. I'm
guessing this might be a factor.
Regards,
>
>
>
>>
>> Other than that, restarting vdsmd has been safe in my experience...
>>
>> Regards,
>>
>> On Thu, Aug 4, 2016 at 6:10 PM, Nicolás <nicolas(a)devels.es> wrote:
>>
>>>
>>>
>>> El 04/08/16 a las 15:25, Arik Hadas escribió:
>>>
>>>>
>>>> ----- Original Message -----
>>>>
>>>>> El 2016-08-04 08:24, Arik Hadas escribió:
>>>>>
>>>>>> ----- Original Message -----
>>>>>>
>>>>>>>
>>>>>>> El 04/08/16 a las 07:18, Arik Hadas escribió:
>>>>>>>
>>>>>>>> ----- Original Message -----
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> We're running oVirt 4.0.1 and today I found out that one of our
>>>>>>>>> hosts
>>>>>>>>> has all its VMs in an unknown state. I actually don't know how (and
>>>>>>>>> when) did this happen, but I'd like to restore service possibly
>>>>>>>>> without
>>>>>>>>> turning off these machines. The host is up, the VMs are up, 'qemu'
>>>>>>>>> process exists, no errors, it's just the VMs running on it that
>>>>>>>>> have a
>>>>>>>>> '?' where status is defined.
>>>>>>>>>
>>>>>>>>> Is it safe in this case to simply modify database and set those
>>>>>>>>> VM's
>>>>>>>>> status to 'up'? I remember having to do this a time ago when we
>>>>>>>>> faced
>>>>>>>>> storage issues, it didn't break anything back then. If not, is
>>>>>>>>> there a
>>>>>>>>> "safe" way to migrate those VMs to a different host and restart the
>>>>>>>>> host
>>>>>>>>> that marked them as unknown?
>>>>>>>>>
>>>>>>>> Hi Nicolás,
>>>>>>>>
>>>>>>>> I assume that the host these VMs are running on is empty in the
>>>>>>>> webadmin,
>>>>>>>> right? if that is the case then you've probably hit [1]. Changing
>>>>>>>> their
>>>>>>>> status to up is not the way to go since these VMs will not be
>>>>>>>> monitored.
>>>>>>>>
>>>>>>> Hi Arik,
>>>>>>>
>>>>>>> By "empty" you mean the webadmin reports the host being running 0
>>>>>>> VMs?
>>>>>>> If so, that's not the case, actually the VM count seems to be correct
>>>>>>> in
>>>>>>> relation to "qemu-*" processes (about 32 VMs), I can even see the
>>>>>>> machines in the "Virtual machines" tab of the host, it's just they
>>>>>>> are
>>>>>>> all marked with the '?' mark.
>>>>>>>
>>>>>> No, I meant the 'Host' column in the Virtual Machines tab but if you
>>>>>> see
>>>>>> the VMs in the "Virtual machines" sub-tab of the host then run_on_vds
>>>>>> points to the right host..
>>>>>>
>>>>>> The host is up in the webadmin as well?
>>>>>> Can you share the engine log?
>>>>>>
>>>>>> Yes, the host is up in the webadmin, there are no issues with it, just
>>>>> the VMs running on it have the '?' mark. I've made 3 tests:
>>>>>
>>>>> 1) Restart engine: did not help
>>>>> 2) Check firewall, seems to be ok.
>>>>> 3) PostgreSQL: UPDATE vm_dynamic SET status = 1 WHERE status = 8; :
>>>>> After a while, I see lots of entries like this:
>>>>>
>>>>> 2016-08-04 09:23:10,910 WARN
>>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>>> (DefaultQuartzScheduler4) [6ad135b8] Correlation ID: null, Call Stack:
>>>>> null, Custom Event ID: -1, Message: VM xxx is not responding.
>>>>>
>>>>> I'm attaching the engine log, but I don't know when did this happen for
>>>>> the first time, though. If there's a manual way/command to migrate VMs
>>>>> to a different host I'd appreciate a hint about it.
>>>>>
>>>>> Is it safe to restart vdsmd on this host?
>>>>>
>>>> The engine log looks fine - the VMs are reported as not-responding for
>>>> some reason. I would restart libvirtd and vdsmd then
>>>>
>>>
>>> Is restarting those two daemons safe? I mean, will that stop all qemu-*
>>> processes, so the VMs marked as unknown will stop?
>>>
>>>
>>> Thanks.
>>>>>
>>>>> Thanks.
>>>>>>>
>>>>>>> Yes, there is no other way to resolve it other than changing the DB
>>>>>>>> but
>>>>>>>> the change should be to update run_on_vds field of these VMs to the
>>>>>>>> host
>>>>>>>> you know they are running on. Their status will then be updates in
>>>>>>>> 15
>>>>>>>> sec.
>>>>>>>>
>>>>>>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1354494
>>>>>>>>
>>>>>>>> Arik.
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>>
>>>>>>>>> Nicolás
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Users mailing list
>>>>>>>>> Users(a)ovirt.org
>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>>
>> --
>> *Ekin Meroğlu** Red Hat Certified Architect*
>>
>> linuxera Özgür Yazılım Çözüm ve Hizmetleri
>> *T* +90 (850) 22 LINUX | *GSM* +90 (532) 137 77 04
>> www.linuxera.com | bilgi(a)linuxera.com
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
--
*Ekin Meroğlu** Red Hat Certified Architect*
linuxera Özgür Yazılım Çözüm ve Hizmetleri
*T* +90 (850) 22 LINUX | *GSM* +90 (532) 137 77 04
www.linuxera.com | bilgi(a)linuxera.com
Ovirt 4.04 unable to umount LUNs on hosts
by Andrea Ghelardi
Hello ovirt gurus,
I'd like to bother you with a long lasting issue I face.
My (new) test setup (not production one) is running ovirt 4.04 hosted engine on a Dell R510 server connected to SAN Compellent SC040 via iscsi.
All systems installed from scratch following guidelines.
Ovirt hosted engine installed using ovirt-appliance.
Everything is working "fine".
BUT
We are unable to perform a clear LUN removal.
We can perform all actions from the web interface (storage add, maintenance, detach, delete) with no error.
However, the underlying device remains mapped to the server. As a result, we are unable to unmap the LUN; otherwise multipath fails, the server fails, ovirt fails, etc.
Steps to reproduce:
1) Create LUN on SAN, map it to server -> OK
2) Log in ovirt web interface, add a new storage targeting LUN -> OK
3) Create disk, create VM, set up VM (optional) -> OK
4) Shutdown VM -> OK
5) Ovirt: Put storage in maintenance -> OK
6) Ovirt: Detach storage -> OK
7) Ovirt: Delete storage -> OK
Expected result:
Ovirt unmaps and removes the device from multipath so that it can be destroyed at the SAN storage level.
It should be possible to perform this action even when the host is not in maintenance.
Current result: the volume remains locked in multipath (map in use).
Trying vgchange -a n <DEVICE> and then multipath -f <DEVICE> removes the device, but it is automatically re-added after a while (???)
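One hedged workaround for the re-add behavior (all names below are placeholders, not taken from the attached details): multipathd will re-create a map for any path it rediscovers on a rescan, so before unmapping at the SAN the retiring LUN can be blacklisted by WWID in /etc/multipath.conf, e.g.:

```
# /etc/multipath.conf - blacklist the LUN being retired so multipathd
# does not recreate the map on the next rescan.
# The WWID below is a placeholder; copy the real one from `multipath -ll`.
blacklist {
    wwid "36000d310000000000000000000000064"
}
```

Followed by multipathd -k"reconfigure" and then multipath -f <DEVICE>. Whether oVirt should handle this itself on storage-domain removal is exactly what this report is about.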
Please note: this is the same behavior we have always faced with our ovirt production environment.
Details attached. Volume to be removed: LUN 000064
Any ideas?
Thanks
AG
[Attachment: ovirt-multipath.txt (base64) — output of multipath -ll, lsscsi,
vgs, lvs, dmsetup info, and /etc/multipath.conf from the host, showing LUN
000064 still mapped.]
Can't boot VM: "At least one numa node has to be configured when enabling memory hotplug"
by gregor
Hi,
Yesterday I exported all my VMs and reinstalled the host from scratch as a
hosted-engine installation. After importing one VM and starting it, I got
the error message below (from engine.log).
After researching what "NUMA" means, I still don't have a clue what this
error refers to or what was configured wrong. It's only one server, and I
followed these [1] instructions.
...
2016-10-04 06:34:30,719 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler4) [66e4bd80] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM webserver is down with error.
Exit message: unsupported configuration: At least one numa node has to
be configured when enabling memory hotplug.
2016-10-04 06:34:30,731 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(DefaultQuartzScheduler4) [66e4bd80] Rerun VM
'b5947250-77d3-46d2-a8c6-cdd9072bb558'. Called from VDS 'host'
2016-10-04 06:34:30,933 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-49) [66e4bd80] Correlation ID: 7e9dded6,
Job ID: 63c37b21-c3f4-40da-9eb2-fe98e824827c, Call Stack: null, Custom
Event ID: -1, Message: Failed to run VM webserver (User:
admin@internal-authz).
...
cheers
gregor
[1] https://www.ovirt.org/documentation/how-to/hosted-engine/
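[Editor's note] The "unsupported configuration" message is libvirt's check that memory hotplug (a <maxMemory> element in the domain XML) requires a guest NUMA topology with at least one cell. A minimal illustrative domain-XML fragment that passes the check looks roughly like this (all values are made up, not taken from this VM):

```xml
<!-- Illustrative only: maxMemory (the hotplug ceiling) is rejected by
     libvirt unless a <numa> topology with at least one <cell> exists. -->
<maxMemory slots='16' unit='KiB'>16777216</maxMemory>
<memory unit='KiB'>4194304</memory>
<cpu>
  <numa>
    <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
  </numa>
</cpu>
```

In oVirt this topology is normally generated by the engine from the VM's NUMA settings, so a VM imported with memory hotplug enabled but no NUMA node defined can hit this error.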
8 years, 1 month
Tracebacks in vdsm.log file
by knarra
Hi,
I see the traceback below in my vdsm.log. Can someone help me
understand why these are logged?
is free, finding out if anyone is waiting for it.
Thread-557::DEBUG::2016-09-30
18:20:25,064::resourceManager::661::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.upgrade_57ee3a08-004b-027b-0395-0000000001d6', Clearing records.
Thread-557::ERROR::2016-09-30
18:20:25,064::utils::375::Storage.StoragePool::(wrapper) Unhandled exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 372, in
wrapper
return f(*a, **kw)
File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line 177,
in run
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
line 78, in wrapper
return method(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 207, in _upgradePoolDomain
self._finalizePoolUpgradeIfNeeded()
File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
line 76, in wrapper
raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
b38e7a14-f880-4259-a7dd-3994bae2dbc2::DEBUG::2016-09-30
18:20:25,065::__init__::398::IOProcessClient::(_startCommunication)
Communication thread for client ioprocess-7 started
ioprocess communication (22325)::INFO::2016-09-30
18:20:25,067::__init__::447::IOProcess::(_processLogs) Starting ioprocess
ioprocess communication (22325)::INFO::2016-09-30
18:20:25,067::__init__::447::IOProcess::(_processLogs) Starting ioprocess
Thanks
kasturi
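[Editor's note] The SecureError in the traceback is raised by vdsm's "securable" wrapper: methods of a secured object (here the storage pool) refuse to run unless the object has first been switched into its "safe" state, which for a pool means the host holds the SPM role. A minimal sketch of the pattern, with simplified names (this is not vdsm's actual code):

```python
# Sketch of the "securable" pattern behind the SecureError above:
# public methods are wrapped so they raise unless the object was
# explicitly moved into the safe state first.
import functools


class SecureError(Exception):
    pass


def _wrap(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        if not getattr(self, "_secured", False):
            raise SecureError("Secured object is not in safe state")
        return method(self, *args, **kwargs)
    return wrapper


def secured(cls):
    """Wrap every public method of cls with the safety check."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            setattr(cls, name, _wrap(attr))
    return cls


@secured
class Pool:
    def __init__(self):
        self._secured = False  # becomes True when the host acquires SPM

    def upgrade(self):
        return "upgraded"


pool = Pool()
try:
    pool.upgrade()       # pool not in safe state yet
except SecureError as e:
    print(e)             # → Secured object is not in safe state
pool._secured = True
print(pool.upgrade())    # → upgraded
```

So the traceback usually means a pool-upgrade step ran after the host had lost (or before it had gained) the SPM role, rather than pointing at data corruption.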
8 years, 1 month
global vs local maintenance with single host
by Gianluca Cecchi
Hello,
how do the two maintenance modes apply in the case of a single host?
During an upgrade, after upgrading the self-hosted engine, leaving global
maintenance, and checking that everything is OK, which mode should I then
put the host into if I finally want to update it as well?
Thanks,
Gianluca
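[Editor's note] Both modes are set through the hosted-engine tool; the general flow is sketched below (command syntax as in oVirt 3.6/4.x, verify against your version; note that on a single host the engine VM has nowhere to migrate, so it must be shut down cleanly before host maintenance):

```
# Engine upgrade: stop the HA agents from acting on the engine VM
hosted-engine --set-maintenance --mode=global
# ... upgrade the engine, verify it is healthy, then:
hosted-engine --set-maintenance --mode=none

# Host update: take this host out of HA scoring
hosted-engine --set-maintenance --mode=local
```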
8 years, 1 month