1009100 likely to be fixed for 3.5.0?
by Paul Jansen
Hello.
There are a few bugs related to live snapshots/storage migrations not working.
https://bugzilla.redhat.com/show_bug.cgi?id=1009100 is one of them and is
targeted for 3.5.0.
According to the bug, there is some engine work still required.

I understand that with EL6-based hosts live storage migration will still not
work (due to a too-old QEMU version), but it should work with F20/F21 hosts
(and EL7 hosts when that comes online).
Am I correct in assuming that in a cluster with both EL6 hosts and newer
hosts (described above), oVirt will allow live storage migration on hosts
that support it and prevent the option from appearing on hosts that do not?

The possibility of getting a newer QEMU on EL6 appears to be tied up in the
CentOS Virt SIG and their proposed repository, which appears to be moving
quite slowly.

I'm looking forward to oVirt closing the gap to VMware vCenter with regard
to live storage migrations.
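A quick way to check what a given host can do (a sketch; package names and
the exact capability keys vary with the distribution and vdsm version):

  # on each host: see which QEMU build is installed (live snapshot and
  # live storage migration support depend on it)
  rpm -q qemu-kvm qemu-kvm-rhev

  # vdsm advertises host capabilities to the engine; depending on the
  # vdsm version, live snapshot support may show up here
  vdsClient -s 0 getVdsCaps | grep -i live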
Re: [ovirt-users] Ovirt WebAdmin Portal
by Karli Sjöberg
On 21 May 2014 at 22:18, Carlos Castillo <carlos.castillo(a)globalr.net> wrote:
>
> regards,
>
> My name is Carlos and I'm an IT admin. I recently noticed that my oVirt
> Web Admin portal shows 0% memory usage for all my virtual machines, a
> situation that does not seem right to me.
>
> I have tried to search for information about why this may be happening,
> but I have not found anything useful, so I am coming to this list looking
> for ideas.

You need to install the "ovirt-guest-agent" in all VMs for that to show.
You will also get the VM's IP, FQDN, some installed packages and who's
currently logged into it, among other things.

/K

> My oVirt Engine Version: 3.3.1-2.el6,
> O.S. CentOS release 6.4 (Final)
>
> --
> Carlos J. Castillo
> ----------------------------------------------------------------------------------
> Ingeniero de Soluciones de TI
> +58 426 2542313
> @Dr4g0nKn1ght
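A minimal sketch of installing the agent inside an EL6 guest (the package
and service names below are the common ones for that release and may differ
on other distributions):

  # inside each guest VM (EL6 example)
  yum install ovirt-guest-agent
  service ovirt-guest-agent start
  chkconfig ovirt-guest-agent on    # have it start on boot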
Ovirt WebAdmin Portal
by Carlos Castillo
regards,
My name is Carlos and I'm an IT admin. I recently noticed that my oVirt Web
Admin portal shows 0% memory usage for all my virtual machines, a situation
that does not seem right to me.
I have tried to search for information about why this may be happening,
but I have not found anything useful, so I am coming to this list looking
for ideas.
My oVirt Engine Version: 3.3.1-2.el6,
O.S. CentOS release 6.4 (Final)
--
Carlos J. Castillo
----------------------------------------------------------------------------------
Ingeniero de Soluciones de TI
+58 426 2542313
@Dr4g0nKn1ght
Fwd: [Users] Ovirt 3.4 EqualLogic multipath Bug 953343
by Gary Lloyd
Hi
I was just wondering if iSCSI multipathing is supported yet on Direct LUN?
I have deployed 3.4.0.1 but I can only see the option for iSCSI
multipathing on storage domains.
We would be glad if it could be, as it would save us having to inject new
code into our vdsm nodes with each new version.
Thanks
*Gary Lloyd*
----------------------------------
IT Services
Keele University
-----------------------------------
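A quick way to see on a host whether more than one iSCSI session is actually
open per LUN (a sketch; device names and output layout vary by setup):

  # list active iSCSI sessions, one line per target/portal session
  iscsiadm -m session

  # show the multipath topology; each LUN should list one path per session
  multipath -ll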
Guys,
Please note that this feature currently may not work. I resolved
several bugs related to this feature but some of my patches are still
waiting for merge.
Regards,
Sergey
----- Original Message -----
> From: "Maor Lipchuk" <mlipchuk(a)redhat.com>
> To: "Gary Lloyd" <g.lloyd(a)keele.ac.uk>, users(a)ovirt.org, "Sergey Gotliv" <
sgotliv(a)redhat.com>
> Sent: Thursday, March 27, 2014 6:50:10 PM
> Subject: Re: [Users] Ovirt 3.4 EqualLogic multipath Bug 953343
>
> IIRC it should also support direct luns as well.
> Sergey?
>
> regards,
> Maor
>
> On 03/27/2014 06:25 PM, Gary Lloyd wrote:
> > Hi I have just had a look at this thanks. Whilst it seemed promising we
> > are in a situation where we use Direct Lun for all our production VM's
> > in order to take advantage of being able to individually replicate and
> > restore vm volumes using the SAN tools. Is multipath supported for
> > Direct Luns or only data domains ?
> >
> > Thanks
> >
> > /Gary Lloyd/
> > ----------------------------------
> > IT Services
> > Keele University
> > -----------------------------------
> >
> >
> > On 27 March 2014 16:02, Maor Lipchuk <mlipchuk(a)redhat.com
> > <mailto:mlipchuk@redhat.com>> wrote:
> >
> > Hi Gary,
> >
> > Please take a look at
> > http://www.ovirt.org/Feature/iSCSI-Multipath#User_Experience
> >
> > Regards,
> > Maor
> >
> > On 03/27/2014 05:59 PM, Gary Lloyd wrote:
> > > Hello
> > >
> > > I have just deployed Ovirt 3.4 on our test environment. Does
> > anyone know
> > > how the ISCSI multipath issue is resolved ? At the moment it is
> > behaving
> > > as before and only opening one session per lun ( we bodged vdsm
> > > python
> > > code in previous releases to get it to work).
> > >
> > > The Planning sheet shows that its fixed but I am not sure what to
do
> > > next:
> >
https://docs.google.com/spreadsheet/ccc?key=0AuAtmJW_VMCRdHJ6N1M3d1F1UTJT...
> > >
> > >
> > > Thanks
> > >
> > > /Gary Lloyd/
> > > ----------------------------------
> > > IT Services
> > > Keele University
> > > -----------------------------------
> > >
> > >
> > > _______________________________________________
> > > Users mailing list
> > > Users(a)ovirt.org <mailto:Users@ovirt.org>
> > > http://lists.ovirt.org/mailman/listinfo/users
> > >
> >
> >
>
>
oVirt Weekly Meeting Minutes: May 21, 2014
by Brian Proffitt
Minutes: http://ovirt.org/meetings/ovirt/2014/ovirt.2014-05-21-14.02.html
Minutes (text): http://ovirt.org/meetings/ovirt/2014/ovirt.2014-05-21-14.02.txt
Log: http://ovirt.org/meetings/ovirt/2014/ovirt.2014-05-21-14.02.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by bkp at 14:02:03 UTC. The full logs are available at
http://ovirt.org/meetings/ovirt/2014/ovirt.2014-05-21-14.02.log.html .
Meeting summary
---------------
* Agenda and Roll Call (bkp, 14:02:19)
* infra updates (bkp, 14:02:42)
* 3.4.z updates (bkp, 14:02:42)
* 3.5 planning (bkp, 14:02:42)
* conferences and workshops (bkp, 14:02:42)
* other topics (bkp, 14:02:42)
* infra updates (bkp, 14:03:01)
* infra os1 issues solved, will add new slaves in the following days
(bkp, 14:06:28)
* 3.4.z updates (bkp, 14:08:05)
* 3.4.z updates 3.4.2: RC scheduled for 2014-05-27 (bkp, 14:11:53)
* 3.4.z updates SvenKieske sent a list of proposed blockers to be
reviewed:
http://lists.ovirt.org/pipermail/users/2014-May/024518.html (bkp,
14:11:53)
* 3.4.z updates Bug 1037663 status: ovirt-log-collector: conflicts
with file from package sos >= 3.0, sbonazzo working with sos
maintainer for coordinating oVirt 3.4.2 release. Not a blocker.
(bkp, 14:11:53)
* 3.4.z updates Discussion is ongoing on mailing list for other bugs
(bkp, 14:11:54)
* 3.5 planning (bkp, 14:12:20)
* 3.5 planning UX Phase one of patternfly is in, and the sorting infra
(bkp, 14:14:53)
* 3.5 planning UX Issues fixed as they are reported. (bkp, 14:14:53)
* 3.5 planning SLA All planned 3.5 features will be ready on time,
fingers crossed :) (bkp, 14:18:22)
* 3.5 planning SLA NUMA feature: APIs (GUI and RESTful) still missing.
(bkp, 14:18:22)
* 3.5 planning SLA optaplanner: in advanced development state. (bkp,
14:18:22)
* 3.5 planning SLA Limits (blkio and cpu, including disk and cpu
profile, including refactoring current network qos and vnic profile
to suit new sla/profiles infra): we are working on them nowadays (no
real issues there) (bkp, 14:18:22)
* 3.5 planning SLA Scheduling RESTful API: developed now - patch this
evening (bkp, 14:18:22)
* LINK: http://www.ovirt.org/Features/Self_Hosted_Engine_iSCSI_Support
all patches have been pushed and under review (sbonazzo, 14:20:07)
* LINK: http://www.ovirt.org/Features/oVirt_Windows_Guest_Tools
(sbonazzo, 14:20:55)
* 3.5 planning integration ovirt 3.5.0 alpha released yesterday (bkp,
14:22:47)
* 3.5 planning integration ovirt live iso uploaded this afternoon
(bkp, 14:22:47)
* 3.5 planning integration ovirt node iso will follow (bkp, 14:22:47)
* 3.5 planning integration 3.5.0 Second Alpha scheduled for May 30 for
Feature Freeze (bkp, 14:22:47)
* 3.5 planning integration
http://www.ovirt.org/Features/Self_Hosted_Engine_iSCSI_Support all
patches have been pushed and under review (bkp, 14:22:47)
* 3.5 planning integration There's an issue on additional host setup,
but it should be fixed easily. Patch pushed and under review. (bkp,
14:22:49)
* 3.5 planning integration F20 support started for ovirt-engine,
hopefully ready for alpha 2 (bkp, 14:22:52)
* 3.5 planning virt Unfinished features are closer to completion...
but nothing got in yet. (bkp, 14:29:06)
* 3.5 planning virt Last week spent fixing major bugs. (bkp,
14:29:06)
* 3.5 planning Gluster Volume capacity - vdsm dependency on
libgfapi-python needs to be added (bkp, 14:34:31)
* 3.5 planning Gluster Volume profile - review comments on patches
incorporated and submitted, need final review and approval (bkp,
14:34:31)
* 3.5 planning Gluster REST API in progress (bkp, 14:34:31)
* 3.5 planning node All features are in progress. (bkp, 14:37:59)
* 3.5 planning node Two features up for review (generic registration
and hosted engine plugin) (bkp, 14:37:59)
* 3.5 planning node Appliances can also be built, now all needs to be
reviewed and tested. (bkp, 14:37:59)
* 3.5 planning node ETA for a node with the bits in place by early
next week. (bkp, 14:37:59)
* 3.5 planning network All Neutron appliance oVirt code is merged.
Feature now depends on some OpenStack repository processes that are
async with oVirt releases. oVirt side completed. (bkp, 14:42:17)
* 3.5 planning network The two MAC pool features are in a tough
review, so they're in danger even for the new deadline of end of
May. (bkp, 14:42:17)
* 3.5 planning network Progress with the RHEL bug on which bridge_opts
depends upon is unknown. danken adds that it is not a blocker for
3.5 (bkp, 14:42:17)
* 3.5 planning network [UPDATE] Progress with the RHEL bug on which
bridge_opts depends upon done (with an asterisk). (bkp, 14:43:34)
* 3.5 planning storage Store OVF on any domains - merged (bkp,
14:49:12)
* 3.5 planning storage Disk alias recycling in web-admin portal -
merged (bkp, 14:49:13)
* 3.5 planning storage [RFE] Snapshot overview in webadmin portal -
merged (bkp, 14:49:13)
* 3.5 planning storage import existing data domain - phase one was
revised yesterday, should be reviewed today and tomorrow, and
hopefully be merged by end of week (bkp, 14:49:13)
* 3.5 planning storage Sanlock fencing seems this will slip 3.5.0
after (infra required a late redesign) (bkp, 14:49:13)
* 3.5 planning storage live merge (delete snapshot) - in progress.
Three patches upstream (engine side), one is going to be merged soon
(today hopefully), the other two under review. (bkp, 14:49:14)
* 3.5 infra Working hard on 3.5 features, and hope everything will
make it on time. (bkp, 14:49:53)
* 3.5 infra The current status is in the planning wiki. (bkp,
14:49:53)
* 3.5 infra If infra-related questions pop up then just email them,
and ovedo will answer them this evening/tomorrow. (bkp, 14:49:53)
* conferences and workshops (bkp, 14:50:22)
* Conferences OpenStack Israel is still on for June 2. (bkp,
14:51:50)
* Conferences FrOSCON CfP closes May 23 (bkp, 14:51:50)
* Conferences OSCON Open Cloud Day CfP closes May 27 (bkp, 14:51:50)
* ACTION: More conference participation is needed for 2014. See bkp
for help planning/submitting presentations. (bkp, 14:51:50)
* other topics (bkp, 14:53:04)
Meeting ended at 14:56:14 UTC.
Action Items
------------
* More conference participation is needed for 2014. See bkp for help
planning/submitting presentations.
Action Items, by person
-----------------------
* bkp
* More conference participation is needed for 2014. See bkp for help
planning/submitting presentations.
People Present (lines said)
---------------------------
* bkp (115)
* sbonazzo (32)
* lvernia (14)
* gchaplik (12)
* nsoffer (11)
* fabiand (9)
* mskrivanek (8)
* danken (5)
* awels (4)
* dcaro (3)
* sahina (3)
* ovirtbot (3)
* jb_badpenny (1)
* SvenKieske (1)
* fsimonce (1)
* mburns (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
sanlock + gluster recovery -- RFE
by Ted Miller
Itamar, I am addressing this to you because one of your assignments seems to
be to coordinate other oVirt contributors when dealing with issues that are
raised on the ovirt-users email list.
As you are aware, there is an ongoing split-brain problem with running
sanlock on replicated gluster storage. Personally, I believe that this is
the 5th time that I have been bitten by this sanlock+gluster problem.
I believe that the following are true (if not, my entire request is probably
off base).
* ovirt uses sanlock in such a way that when the sanlock storage is on a
replicated gluster file system, very small storage disruptions can result
in a gluster split-brain on the sanlock space
o gluster is aware of the problem, and is working on a different way of
replicating data, which will reduce these problems.
* most (maybe all) of the sanlock locks have a short duration, measured in
seconds
* there are only a couple of things that a user can safely do from the
command line when a file is in split-brain
o delete the file
o rename (mv) the file
_How did I get into this mess?_
had 3 hosts running ovirt 3.3
each hosted VMs
gluster replica 3 storage
engine was external to cluster
upgraded 3 hosts from ovirt 3.3 to 3.4
hosted-engine deploy
used new gluster volume (accessed via nfs) for storage
storage was accessed using localhost:engVM1 link (localhost was
probably a poor choice)
created new engine on VM (did not transfer any data from old engine)
added 3 hosts to new engine via web-gui
ran above setup for 3 days
shut entire system down before I left on vacation (holiday)
came back from vacation
powered on hosts
found that iptables did not have rules for gluster access
(a continuing problem if host installation is allowed to set up firewall)
added rules for gluster
glusterfs now up and running
added storage manually
tried "hosted-engine --vm-start"
vm did not start
logs show sanlock errors
ran "gluster volume heal engVM1 full"
"gluster volume heal engVM1 info split-brain" showed 6 files in split-brain
all 5 prefixed by /rhev/data-center/mnt/localhost\:_engVM1
UUID/dom_md/ids
UUID/images/UUID/UUID (VM hard disk)
UUID/images/UUID/UUID.lease
UUID/ha_agent/hosted-engine.lockspace
UUID/ha_agent/hosted-engine.metadata
I copied each of the above files off of each of the three bricks to a safe
place (15 files copied)
I renamed the 5 files on /rhev/....
I copied the 5 files from one of the bricks to /rhev/
files can now be read OK (e.g. cat ids)
sanlock.log shows error sets like these:
2014-05-20 03:23:39-0400 36199 [2843]: s3358 lockspace 5ebb3b40-a394-405b-bbac-4c0e21ccd659:1:/rhev/data-center/mnt/localhost:_engVM1/5ebb3b40-a394-405b-bbac-4c0e21ccd659/dom_md/ids:0
2014-05-20 03:23:39-0400 36199 [18873]: open error -5 /rhev/data-center/mnt/localhost:_engVM1/5ebb3b40-a394-405b-bbac-4c0e21ccd659/dom_md/ids
2014-05-20 03:23:39-0400 36199 [18873]: s3358 open_disk /rhev/data-center/mnt/localhost:_engVM1/5ebb3b40-a394-405b-bbac-4c0e21ccd659/dom_md/ids error -5
2014-05-20 03:23:40-0400 36200 [2843]: s3358 add_lockspace fail result -19
I am now stuck
What I would like to see in oVirt to help me (and others like me), with
alternatives listed in order from most desirable (automatic) to least
desirable (a set of commands to type, with lots of variables to figure out):
1. automagic recovery
* When a host is not able to access sanlock, it writes a small "problem"
text file into the shared storage
o the host-ID as part of the name (so only one host ever accesses that
file)
o a status number for the error causing problems
o time stamp
o time stamp when last sanlock lease will expire
o if sanlock is able to access the file, the "problem" file is deleted
* when the time for its last sanlock lease to expire has passed, the
highest-numbered host does a survey
o did all other hosts create "problem" files?
o do all "problem" files show same (or compatible) error codes related
to file access problems?
o are all hosts communicating by network?
o if yes to all above
* delete all sanlock storage space
* initialize sanlock from scratch
* restart whatever may have given up because of sanlock
* restart VM if necessary
2. recovery subcommand
* add "hosted-engine --lock-initialize" command that would delete sanlock,
start over from scratch
3. script
* publish a script (in ovirt packages or available on web) which, when run,
does all (or most) of the recovery process needed.
4. commands
* publish on the web a "recipe" for dealing with files that commonly go
split-brain (a rough sketch of such a recipe follows below)
o ids
o *.lease
o *.lockspace
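A rough sketch of the manual steps described above, for the files that
commonly go split-brain (assumes a replica volume mounted under /rhev/...
and that one brick is known to hold a good copy; the UUID and brick paths
are placeholders):

  # adjust these to your storage domain UUID and brick layout
  MNT=/rhev/data-center/mnt/localhost:_engVM1/<UUID>
  GOOD=/bricks/engVM1/<UUID>            # brick known to hold a good copy

  # 1. save copies of the affected file from every brick somewhere safe
  # 2. move the split-brain copy aside on the mounted volume
  mv "$MNT/dom_md/ids" "$MNT/dom_md/ids.split-brain"
  # 3. copy the good copy back in through the mount point
  cp "$GOOD/dom_md/ids" "$MNT/dom_md/ids"
  # 4. trigger a heal and confirm nothing is left in split-brain
  gluster volume heal engVM1
  gluster volume heal engVM1 info split-brain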
Any chance of any help on any of the above levels?
Ted Miller
Elkhart, IN, USA
[QE][ACTION NEEDED] oVirt 3.4.2 RC status
by Sandro Bonazzola
Hi,
We're going to start composing oVirt 3.4.2 RC on *2014-05-27 08:00 UTC* from 3.4 branches.
The bug tracker [1] shows no blocking bugs for the release.
There are still 71 bugs [2] targeted to 3.4.2.
Excluding node and documentation bugs, we still have 39 bugs [3] targeted to 3.4.2.
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.4.2 should not be released without them fixed.
- Please update the target to any next release for bugs that won't be in 3.4.2:
it will ease gathering the blocking bugs for next releases.
- Please fill release notes, the page has been created here [4]
- If you need to rebuild packages, please build them before *2014-05-26 15:00 UTC*.
Otherwise we'll take last 3.4 snapshot available.
Community:
- If you're going to test this release, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1095370
[2] http://red.ht/1oqLLlr
[3] http://red.ht/1nIAZXO
[4] http://www.ovirt.org/OVirt_3.4.2_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.4.2_Testing
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[QE][ACTION NEEDED] oVirt 3.5.0 Alpha status
by Sandro Bonazzola
Hi,
We released oVirt 3.5.0 Alpha on *2014-05-20* and we're now preparing for feature freeze scheduled for 2014-05-30.
We're going to compose a second Alpha on Friday *2014-05-30 08:00 UTC*.
Maintainers:
- Please be sure that the master snapshot allows creating VMs before *2014-05-29 15:00 UTC*
The bug tracker [1] shows the following proposed blockers to be reviewed:
Bug ID Whiteboard Status Summary
1001100 integration NEW Add log gathering for a new ovirt module (External scheduler)
1073944 integration ASSIGNED Add log gathering for a new ovirt module (External scheduler)
1060198 integration NEW [RFE] add support for Fedora 20
1099432 virt NEW noVNC client doesn't work: Server disconnected (code: 1006)
Feature freeze has been postponed to 2014-05-30 and the following features should be testable in 3.5.0 Alpha according to Features Status Table [2]
Group oVirt BZ Title
gluster 1096713 Monitoring (UI plugin) Dashboard (Integrated with Nagios monitoring)
infra 1090530 [RFE] Please add host count and guest count columns to "Clusters" tab in webadmin
infra 1054778 [RFE] Allow to perform fence operations from a host in another DC
infra 1090803 [RFE] Change the "Slot" field to "Service Profile" when cisco_ucs is selected as the fencing type
infra 1090511 [RFE] Improve fencing robustness by retrying failed attempts
infra 1090794 [RFE] Search VMs based on MAC address from web-admin portal
infra 1090793 consider the event type while printing events to engine.log
infra 1090796 [RFE] Re-work engine ovirt-node host-deploy sequence
infra 1090798 [RFE] Admin GUI - Add host uptime information to the "General" tab
infra 1090808 [RFE] Ability to dismiss alerts and events from web-admin portal
infra-api 1090797 [RFE] RESTAPI: Add /tags sub-collection for Template resource
infra-dwh 1091686 prevent OutOfMemoryError after starting the dwh service.
network 1078836 Add a warning when adding display network
network 1079719 Display of NIC Slave/Bond fault on Event Log
network 1080987 Support ethtool_opts functionality within oVirt
storage 1054241 Store OVF on any domains
storage 1083312 Disk alias recycling in web-admin portal
ux 1064543 oVirt new look and feel [PatternFly adoption] - phase #1
virt 1058832 Allow to clone a (down) VM without snapshot/template
virt 1031040 can't set different keymap for vnc via runonce option
virt 1043471 oVirt guest agent for SLES
virt 1083049 add progress bar for vm migration
virt 1083065 EL 7 guest compatibility
virt 1083059 "Instance types (new template handling) - adding flavours"
virt Allow guest serial number to be configurable
virt 1047624 [RFE] support BIOS boot device menu
virt 1083129 allows setting netbios name, locale, language and keyboard settings for windows vm's
virt 1038632 spice-html5 button to show debug console/output window
virt 1080002 [RFE] Enable user defined Windows Sysprep file done
Some more features may be included since they were nearly complete as of the last sync meeting.
The table will be updated at the next sync meeting, scheduled for 2014-05-21.
There are still 382 bugs [3] targeted to 3.5.0.
Excluding node and documentation bugs we still have 319 bugs [4] targeted to 3.5.0.
Maintainers / Assignee:
- Please remember to rebuild your packages before *2014-05-29 15:00 UTC* if needed, otherwise the nightly snapshot will be taken.
- Please be sure that the master snapshot allows creating VMs before *2014-05-29 15:00 UTC*
- If you find a blocker bug please remember to add it to the tracker [1]
- Please start filling release notes, the page has been created here [5]
Community:
- You're welcome to join us testing this alpha release and getting involved in oVirt Quality Assurance[6]!
[1] http://bugzilla.redhat.com/1073943
[2] http://bit.ly/17qBn6F
[3] http://red.ht/1pVEk7H
[4] http://red.ht/1rLCJwF
[5] http://www.ovirt.org/OVirt_3.5_Release_Notes
[6] http://www.ovirt.org/OVirt_Quality_Assurance
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Jpackage down
by Neil
Hi guys,
I'm doing an urgent oVirt upgrade from 3.2 (dreyou) to 3.4 (official),
but I see that www.jpackage.org is down.
I've managed to install jboss-as-7.1.1-11.el6.x86_64 from another
location; however, I'm not sure if this will be compatible with 3.4, as
per the instructions from
"http://wiki.dreyou.org/dokuwiki/doku.php?id=ovirt_rpm_start33"
I've tested browsing to the site on two different internet links, as
well as from the remote server but I'm just getting...
RepoError: Cannot retrieve repository metadata (repomd.xml) for
repository: ovirt-jpackage-6.0-generic. Please verify its path and try
again
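While the mirror is unreachable, the metadata error can usually be worked
around by skipping that repository for the transaction (a sketch; the repo
id is taken from the error above, and it assumes the jboss dependency is
already satisfied from elsewhere):

  yum --disablerepo=ovirt-jpackage-6.0-generic clean metadata
  yum --disablerepo=ovirt-jpackage-6.0-generic update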
Can I go ahead and attempt the upgrade using 7.1.1-11?
Thanks!
Regards.
Neil Wilson.
Power Management IBM IMM
by Milen Nikolov
Hi,

Any advice on which Power Management driver to use with IBM's IMM2
management module?
There's only a driver for the very old RSA management module.
Best!,
Milen.
---
Milen Nikolov
CTO, Daticum JSC
e-mail: milen(a)sirma.bg
tel: +359 2 490 1580
mobile: +359 886 409 033
www.daticum.com www.sirma.bg
---
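For reference, IMM2 modules generally speak IPMI over LAN, so the generic
ipmilan fence type (usually with the lanplus option enabled) is the common
choice where no dedicated IMM2 agent exists. A sketch of testing it from a
host shell; the address and credentials are placeholders:

  # requires the fence-agents package; option names may differ slightly
  # between fence-agents versions
  fence_ipmilan -a imm2.example.com -l USERID -p PASSW0RD -P -o status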