Upgrade instructions 3.3.3 to 3.4.<latest>
by Jim Rippon
Hi all,
I'm running a production stack on 3.3.3 with three datacentres (one in my
DMZ with two hosts using NFS, one in my DMZ on the engine host with local
storage, and one internal with NFS storage).
Could you point me to the upgrade instructions I should follow to get from
where I am to where I need to be, and tell me what downtime I might need to
incur, so I can plan it?
Thanks in advance,
Jim
Re: [ovirt-users] gluster performance oVirt 3.4
by Vadims Korsaks
Quoting Humble Devassy Chirammal <humble.devassy(a)gmail.com>:
>
> | Quoting Vijay Bellur <vbellur(a)redhat.com>:
> | > On 05/11/2014 02:04 AM, Vadims Korsaks wrote:
> | > > Hi!
> | > >
> | > > Created a 2-node setup with oVirt 3.4 and CentOS 6.5; for storage, created a
> | > > 2-node replicated gluster (3.5) FS on the same hosts as oVirt.
> | > > The mount looks like this:
> | > > 127.0.0.1:/gluster01 on /rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01
> | > > type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
> | > >
> | > > When I run a gluster test with dd, something like
> | > > dd if=/dev/zero bs=1M count=20000 of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/kaka
> | > > I'm getting ~ 110 MB/s, which is the 1 Gbps speed of the ethernet adapter.
> | > >
> | > > But from within a VM created in oVirt the speed is lower than 20 MB/s.
> | > >
> | > > Why is there such a huge difference?
> | > > How can I improve the VMs' disk speed?
> | > >
> | >
> | > What are your gluster volume settings? Have you applied the following
> | > performance tunables in gluster's virt profile:
> | >
> | > eager-lock=enable
> | > remote-dio=enable
> | >
> | > Regards,
> | > Vijay
> | >
> | Settings were:
> | [root@centos155 ~]# gluster volume info gluster01
> |
> | Volume Name: gluster01
> | Type: Replicate
> | Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
> | Status: Started
> | Number of Bricks: 1 x 2 = 2
> | Transport-type: tcp
> | Bricks:
> | Brick1: 10.2.75.152:/mnt/gluster01/brick
> | Brick2: 10.2.75.155:/mnt/gluster01/brick
> | Options Reconfigured:
> | storage.owner-gid: 36
> | storage.owner-uid: 36
> |
> | After adding your settings it now looks like this:
> |
> | [root@centos155 ~]# gluster volume info gluster01
> |
> | Volume Name: gluster01
> | Type: Replicate
> | Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
> | Status: Started
> | Number of Bricks: 1 x 2 = 2
> | Transport-type: tcp
> | Bricks:
> | Brick1: 10.2.75.152:/mnt/gluster01/brick
> | Brick2: 10.2.75.155:/mnt/gluster01/brick
> | Options Reconfigured:
> | network.remote-dio: enable
> | cluster.eager-lock: enable
> | storage.owner-gid: 36
> | storage.owner-uid: 36
> |
> | But this didn't affect performance in any big way.
> | Should the hosts be restarted?
> |
>
> The glusterfs storage domain configuration GUI has an "Optimize for virt store"
> option, which has to be enabled when configuring for virt store.
>
> Ref: http://www.ovirt.org/Features/GlusterFS_Storage_Domain
>
> If the configuration is manual, you need to set the group to 'virt' as shown below:
>
> # gluster volume set VOLNAME group virt
>
> This will enable the options below on the gluster volume:
>
> quick-read=off
> read-ahead=off
> io-cache=off
> stat-prefetch=off
> eager-lock=enable
> remote-dio=on
>
> Can you please make sure the group has been set properly?
>
> Also, invoke "dd" with the oflag=direct option and check whether it helps.
>
> --Humble

Thanks a lot! Now it's much better - from the VM I can get dd at ~ 60 MB/s.
This is still ~ 2x lower than from the host, but 3x better than it was before :)
BTW, I could not find the "Optimize for virt store" GUI option in oVirt 3.5.
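For anyone hitting the same issue, a minimal sketch of the two suggestions above, reusing the volume name and mount path from this thread (the output file name is just an example, and the exact options the virt group applies can differ between gluster releases):

# Apply the virt tuning group to the volume backing the oVirt storage domain
gluster volume set gluster01 group virt

# Check that the options actually landed on the volume
gluster volume info gluster01

# Repeat the throughput test with direct I/O so the page cache does not skew the result
dd if=/dev/zero bs=1M count=2000 oflag=direct \
   of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/ddtest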
Re: [ovirt-users] gluster performance oVirt 3.4
by Vadims Korsaks
Quoting Sahina Bose <sabose(a)redhat.com>:
>
> On 05/13/2014 07:27 PM, Vadims Korsaks wrote:
> > [...]
> > Thanks a lot! Now it's much better - from the VM I can get dd at ~ 60 MB/s.
> > This is still ~ 2x lower than from the host, but 3x better than it was before :)
> >
> > BTW, I could not find the "Optimize for virt store" GUI option in oVirt 3.5.
>
>
> The option "Optimize for Virt store" is available when you select a
> volume in oVirt - both as a right-click menu option and as a button
> in the top sub-navigation bar.
>
> You can also check this option while creating a gluster volume using
> the oVirt GUI.
>
>
I have glusterfs as my master storage domain, but there is nothing under
Volumes - it is empty. And I can't create a gluster volume; there is no
choice offered for Data Center and Volume Cluster.
[QE] oVirt 3.5.0 Alpha status
by Sandro Bonazzola
Hi,
We're going to start composing oVirt 3.5.0 Alpha on 2014-05-16 08:00 UTC from master branches.
The bug tracker [1] shows the following proposed blockers to be reviewed:
Bug ID Whiteboard Status Summary
1001100 integration NEW Add log gathering for a new ovirt module (External scheduler)
1073944 integration ASSIGNED Add log gathering for a new ovirt module (External scheduler)
1060198 integration NEW [RFE] add support for Fedora 20
Feature freeze has been postponed to 2014-05-30, and the following features should be testable in 3.5.0 Alpha according to the Features Status Table [2]:
Group oVirt BZ Title
gluster 1096713 Monitoring (UI plugin) Dashboard (Integrated with Nagios monitoring)
infra 1090530 [RFE] Please add host count and guest count columns to "Clusters" tab in webadmin
infra 1054778 [RFE] Allow to perform fence operations from a host in another DC
infra 1090803 [RFE] Change the "Slot" field to "Service Profile" when cisco_ucs is selected as the fencing type
infra 1090511 [RFE] Improve fencing robustness by retrying failed attempts
infra 1090794 [RFE] Search VMs based on MAC address from web-admin portal
infra 1090793 consider the event type while printing events to engine.log
infra 1090796 [RFE] Re-work engine ovirt-node host-deploy sequence
infra 1090798 [RFE] Admin GUI - Add host uptime information to the "General" tab
infra 1090808 [RFE] Ability to dismiss alerts and events from web-admin portal
infra-api 1090797 [RFE] RESTAPI: Add /tags sub-collection for Template resource
infra-dwh 1091686 prevent OutOfMemoryError after starting the dwh service.
network 1078836 Add a warning when adding display network
network 1079719 Display of NIC Slave/Bond fault on Event Log
network 1080987 Support ethtool_opts functionality within oVirt
storage 1054241 Store OVF on any domains
storage 1083312 Disk alias recycling in web-admin portal
ux 1064543 oVirt new look and feel [PatternFly adoption] - phase #1
virt 1058832 Allow to clone a (down) VM without snapshot/template
virt 1031040 can't set different keymap for vnc via runonce option
virt 1043471 oVirt guest agent for SLES
virt 1083049 add progress bar for vm migration
virt 1083065 EL 7 guest compatibility
virt 1083059 "Instance types (new template handling) - adding flavours"
virt Allow guest serial number to be configurable
virt 1047624 [RFE] support BIOS boot device menu
virt 1083129 allows setting netbios name, locale, language and keyboard settings for windows vm's
virt 1038632 spice-html5 button to show debug console/output window
virt 1080002 [RFE] Enable user defined Windows Sysprep file done
Some more features may be included since they were close to completion at the last sync meeting.
The table will be updated at the next sync meeting, scheduled for 2014-05-14.
There are still 383 bugs [3] targeted to 3.5.0.
Excluding node and documentation bugs, we still have 321 bugs [4] targeted to 3.5.0.
Maintainers / Assignee:
- Please remember to rebuild your packages before 2014-05-16 08:00 UTC if needed, otherwise the nightly snapshot will be taken.
- If you find a blocker bug please remember to add it to the tracker [1]
- Please start filling in the release notes; the page has been created here [5]
All users:
- You're welcome to join us testing this alpha release and getting involved in oVirt Quality Assurance[6]!
[1] http://bugzilla.redhat.com/1073943
[2] http://bit.ly/17qBn6F
[3] http://red.ht/1pVEk7H
[4] http://red.ht/1rLCJwF
[5] http://www.ovirt.org/OVirt_3.5_Release_Notes
[6] http://www.ovirt.org/OVirt_Quality_Assurance
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Hitachi hus110 and ovirt
by Dan Yasny
Hi all
I wonder if anyone has a working setup with an HDS device. Right now I am
seeing a weird performance issue.
With the default multipath settings, dd of zeros into the /dev/mapper LUN
shows ~0.2 kbps, while writing directly into a single path at /dev/sdb
provides 900 Mbps.
I am using 10 GbE iSCSI with 4 ports on the HDS and dual-port CNAs on the hosts.
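For reference, the comparison described above boils down to something like this (the multipath device name is a placeholder - use the WWID shown by multipath -ll; oflag=direct keeps the page cache out of the measurement):

# Show the multipath topology and path states for the HDS LUN
multipath -ll

# Write through the multipath device (placeholder name)
dd if=/dev/zero of=/dev/mapper/<lun_wwid> bs=1M count=1000 oflag=direct

# Write through one underlying path for comparison
dd if=/dev/zero of=/dev/sdb bs=1M count=1000 oflag=direct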
Thanks
Dan
Snapshot removal
by Maurice "Moe" James
Is it just me, or does it take a very long time to delete a snapshot?
Upwards of 30 minutes to delete a snapshot of a 7 GB drive.
stateless is false, but state changes do not persist after a poweroff.
by Zhong Qiang
oVirt: 3.4.1
Guest_OS: win7_x64
VMpool_Name: normal
[oVirt shell (connected)]# list vms --query "name=normal-8" --show-all
id : 92d11bbd-61dd-49a2-a18a-aeed44f06bb6
name : normal-8
cluster-id : ecd1f4cd-a3f4-46ea-a2ad-dc604218d47a
cpu-architecture : X86_64
cpu-topology-cores : 1
cpu-topology-sockets : 2
cpu_shares : 0
creation_time : 2014-05-09 14:25:04+08:00
delete_protected : False
display-address : ovirt.vdi.com
display-allow_override : False
display-monitors : 1
display-port : 5900
display-secure_port : 5901
display-single_qxl_pci : False
display-smartcard_enabled : False
display-type : spice
high_availability-enabled : False
high_availability-priority : 0
host-id : 867b2e10-45fe-4305-98af-b7aa6fddb2a6
memory : 1073741824
memory_policy-guaranteed : 536870912
migration_downtime : -1
origin : ovirt
os-boot-dev : hd
os-type : windows_7x64
placement_policy-affinity : migratable
placement_policy-host-id : 867b2e10-45fe-4305-98af-b7aa6fddb2a6
sso-methods-method-id : GUEST_AGENT
start_time : 2014-05-13 15:40:20.855000+08:00
stateless : False
status-state : up
stop_time : 2014-05-13 15:40:02.807000+08:00
template-id : 3f72cea1-9b2f-4025-862a-8acd3010fcb1
type : desktop
usb-enabled : False
vmpool-id : f12de2e0-1b68-4f77-a428-a9a83a5ac458
I create a file called vmuser1 and put it on the desktop of the VM
(normal-8, win7), then shut down and power on. The file disappears.
Thanks,
qiang
How to install spice-xpi-2.8 on ubuntu12.04?
by Zhong Qiang
When I compile spice-xpi-2.8 on Ubuntu 12.04 with Firefox 29, I receive this error:
################################################################################################
root@user:~/src/spice-xpi-2.8# make
make all-recursive
make[1]: Entering directory `/root/src/spice-xpi-2.8'
Making all in SpiceXPI
make[2]: Entering directory `/root/src/spice-xpi-2.8/SpiceXPI'
Making all in src
make[3]: Entering directory `/root/src/spice-xpi-2.8/SpiceXPI/src'
Making all in plugin
make[4]: Entering directory `/root/src/spice-xpi-2.8/SpiceXPI/src/plugin'
GEN nsISpicec.xpt
make all-am
make[5]: Entering directory `/root/src/spice-xpi-2.8/SpiceXPI/src/plugin'
CXX libnsISpicec_la-controller.lo
controller.cpp: In destructor 'SpiceController::~SpiceController()':
controller.cpp:73:5: warning: format not a string literal and no format
arguments [-Wformat-security]
CXX libnsISpicec_la-np_entry.lo
CXX libnsISpicec_la-npn_gate.lo
CXX libnsISpicec_la-npp_gate.lo
CXX libnsISpicec_la-nsScriptablePeer.lo
In file included from nsScriptablePeer.cpp:54:0:
/root/src/xulrunner-sdk/include/nsError.h:186:14: error: expected
constructor, destructor, or type conversion before '(' token
/root/src/xulrunner-sdk/include/nsError.h:188:14: error: expected
constructor, destructor, or type conversion before '(' token
make[5]: *** [libnsISpicec_la-nsScriptablePeer.lo] Error 1
make[5]: Leaving directory `/root/src/spice-xpi-2.8/SpiceXPI/src/plugin'
make[4]: *** [all] Error 2
make[4]: Leaving directory `/root/src/spice-xpi-2.8/SpiceXPI/src/plugin'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/root/src/spice-xpi-2.8/SpiceXPI/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/src/spice-xpi-2.8/SpiceXPI'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/src/spice-xpi-2.8'
make: *** [all] Error 2
#############################################################################################
Any help is greatly appreciated. Thank you.
Ovirt-3.4.1: How to: Upgrading Hosted Engine Cluster
by Daniel Helgenberger
Hello,
Failing to find a procedure for how to actually upgrade an HA cluster, I did
the following, which turned out to work pretty well.
I am somewhat new to oVirt and was amazed how well it actually went; I did
not need to shut down a single VM (well, one because of memory usage; many
of my running VMs have fancy stuff like iSCSI and FC LUNs via a Quantum
StorNext HA cluster):
1. Set the cluster to global maintenance (see the command sketch after
the footnotes below).
2. Log in to the oVirt engine and do the upgrade according to the release
notes.
3. After the upgrade is finished and the engine is running, set the
first node to local maintenance.
4. Log in to the first node and yum update (with the removal of
ovirt-release as mentioned in the release notes).* I rebooted the
node because of the kernel update.
5. Return to oVirt and reinstall the node from the GUI; it will be set
to operational automatically.**
6. Repeat steps 3-5 for the rest of the nodes.
7. Remove global maintenance.
8. Update the last node.***
* I first tried to do this with re-install from the GUI. This failed, so I
used the yum update method to update all relevant services.
** I do not know if this was necessary. I did this because
hosted-engine --deploy does the same thing when adding a host.
*** I found this to be necessary because I had all my nodes in local
maintenance and could not migrate the hosted engine from the last node
any more. The host activation in oVirt did not remove the local
maintenance set prior to the update (which it should, IMHO). It might be
desirable to have a hosted-engine command option to remove local
maintenance for that reason.
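A command-level sketch of the maintenance switches used in the steps above, assuming "global"/"local" maintenance refers to the hosted-engine HA maintenance modes (the node reinstall itself is still done from the web GUI):

# Step 1: put the whole hosted-engine cluster into global maintenance (run on an HA host)
hosted-engine --set-maintenance --mode=global

# Step 3: before updating a node, put just that node into local maintenance
hosted-engine --set-maintenance --mode=local

# Step 4: update the node packages, and reboot if a new kernel came in
yum update
reboot

# Step 7: when everything is upgraded, leave maintenance again
hosted-engine --set-maintenance --mode=none

# At any point, check where the engine VM runs and the maintenance state
hosted-engine --vm-status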
--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767