any virtual machine does not start
by nicola.gentile.to
Hi,
I have a problem with oVirt 3.6.
When I click the "Run" button to start a VM, it does not start and terminates
immediately.
This is the error in the admin portal:
VM ubuexp is down with error. Exit message: internal error: process
exited while connecting to monitor: 2016-04-18T08:18:22.499782Z
qemu-kvm: -drive
file=/rhev/data-center/00000001-0001-0001-0001-00000000037d/88dfff37-4ad5-446c-a957-1cf67829c9a6/images/da93350c-8926-42bf-99fa-8a7aca043fcd/22e7803d-e266-4e3e-ae34-0181f8acef8b,if=none,id=drive-virtio-disk0,format=qcow2,serial=da93350c-8926-42bf-99fa-8a7aca043fcd,cache=none,werror=stop,rerror=stop,aio=native:
Could not open
'/rhev/data-center/00000001-0001-0001-0001-00000000037d/88dfff37-4ad5-446c-a957-1cf67829c9a6/images/da93350c-8926-42bf-99fa-8a7aca043fcd/22e7803d-e266-4e3e-ae34-0181f8acef8b':
Permission denied
.
My oVirt installation is composed of the following:
- 1 manager (virtualized)
- 1 host
- storage iscsi
Everything seems to work well from the web administration portal, but
no VM will start.
I attach a log file.
Please help me!
Nick
Re: [ovirt-users] Users Digest, Vol 55, Issue 124
by Charles Tassell
Hi Nick,
Try running "ls -l
/rhev/data-center/00000001-0001-0001-0001-00000000037d/88dfff37-4ad5-446c-a957-1cf67829c9a6/images/da93350c-8926-42bf-99fa-8a7aca043fcd/22e7803d-e266-4e3e-ae34-0181f8acef8b"
and check the ownership of the file. It should be owned by vdsm:kvm (or
user and group 36:36, I think). Another thing to check would be that the
parent directories are accessible. Try running "su -s /bin/bash vdsm"
and then "cd
/rhev/data-center/00000001-0001-0001-0001-00000000037d/88dfff37-4ad5-446c-a957-1cf67829c9a6/images"
to make sure you can access all the parent folders. If you can, try
creating a temp file ("echo hello >hello") in that same directory to
make sure that the iSCSI mount hasn't gone read only.
If none of that helps, I'd say take a look at the output of vgdisplay to
make sure that the volume group still has free physical extents (Free
PE / Size near the bottom). If you are using thin provisioning, that might
be the issue.
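Roughly, those checks boil down to something like the following (a sketch
only; the shortened paths stand in for the full image path quoted in the
error message above):

  ls -l /rhev/data-center/.../images/da93350c-.../22e7803d-...   # owner should be vdsm:kvm (uid/gid 36:36)
  su -s /bin/bash vdsm             # become the vdsm user (its default shell is nologin)
  cd /rhev/data-center/.../images/da93350c-...                   # fails if a parent directory is not traversable
  echo hello > hello && rm hello   # fails if the mount has gone read-only
  exit
  vgdisplay | grep -i free         # as root; check Free PE / Size, especially with thin provisioning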
This is all based on the issue being disk permission errors. It might
be an error with the libvirt authentication, but I have no idea how to
diagnose that.
adding glusterfs to ovirt
by Fabrice Bacchella
I installed an oVirt stack without glusterfs. Now I want to add it, so I configured a cluster to be a gluster-enabled one. Now I want to add a gluster volume to it.
That went fine until I clicked OK in the "New Volume" dialog; the answer was:
Error while executing action Create Gluster Volume: Unexpected exception
In the log, I saw:
2016-04-13 16:45:53,573 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler_Worker-84) [5303bffb] FINISH, GlusterTasksListVDSCommand, log id: 2e5a7991
2016-04-13 16:45:53,573 ERROR [org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob] (DefaultQuartzScheduler_Worker-84) [5303bffb] Error updating tasks from CLI: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterTasksListVDS, error = The method does not exist / is not available., code = -32601 (Failed with error unexpected and code 16)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:112) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService.runVdsCommand(GlusterTasksService.java:64) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService.getTaskListForCluster(GlusterTasksService.java:32) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob.updateGlusterAsyncTasks(GlusterTasksSyncJob.java:87) [bll.jar:]
at sun.reflect.GeneratedMethodAccessor250.invoke(Unknown Source) [:1.7.0_99]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_99]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_99]
at org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81) [scheduler.jar:]
at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52) [scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
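A -32601 "method does not exist / is not available" reply from VDSM usually
points at the gluster verbs simply not being exposed on the host, for
instance because the vdsm-gluster package is missing. A quick check on the
host might be (an assumption drawn from the error code, not a confirmed
diagnosis):

  rpm -q vdsm-gluster glusterfs-server   # both should be installed on a gluster-enabled host
  service glusterd status                # or: systemctl status glusterd
  gluster peer status                    # basic sanity check of the gluster CLI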
backup-engine 3.6.4.1-1 Files Compressor still broken in Centos 6.7? Work around
by Jack Greene
Been trying to run a complete backup on a fresh install to get rid of that
annoying alert.
engine-backup --scope=all
Kept failing with the error:
2016-04-17 04:05:31 8742: Creating temp folder
/tmp/engine-backup.ymrXeYZay1/tar
2016-04-17 04:05:31 8742: OUTPUT: - Files
2016-04-17 04:05:31 8742: Backing up files to
/tmp/engine-backup.ymrXeYZay1/tar/files
2016-04-17 04:05:31 8742: FATAL: Failed backing up /etc/ovirt-engine
Research shows a bug reported on version 3.6.0 related to the tar options
https://gerrit.ovirt.org/#/c/48596/3/packaging/bin/engine-backup.sh
https://bugzilla.redhat.com/show_bug.cgi?id=1282397
But a grep of the script shows the changes have been made:
[root@engine1 iso]# grep cpSs /usr/share/ovirt-engine/bin/engine-backup.sh
[root@engine1 iso]# grep cpS /usr/share/ovirt-engine/bin/engine-backup.sh
tar -C "${dir}" -cpS"${ARCHIVE_COMPRESS_OPTION}"f "${file}" . >>
"${tar_log}" 2>&1
tar -C / --files-from - -cpS"${FILES_COMPRESS_OPTION}"f
"${target}" || logdie "Failed backing up ${paths}"
Verified I had a new version (just re-installed ovirt-engine today)
ovirt-engine-3.6.4.1-1.el6.noarch
Linux engine1.attlocal.net 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23
03:35:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# ovirt-engine-backup - oVirt engine backup and restore utility
# Copyright (C) 2013-2016 Red Hat, Inc.
Then I noticed there is an option to turn off --files-compressor. Doing this
I was able to get a complete backup.
[root@engine1 iso]# engine-backup --mode=backup --scope=files
--file=/mnt/h97m/backup_files_20160417
--log=/root/backup_files_20160417.log --files-compressor=None
Backing up:
Notifying engine
- Files
Packing into file '/mnt/h97m/backup_files_20160417'
Notifying engine
Done.
[root@engine1 iso]# engine-backup --mode=backup --scope=all
--file=/mnt/h97m/backup_all_20160417 --log=/root/backup_all_20160417.log
--files-compressor=None
Backing up:
Notifying engine
- Files
- Engine database 'engine'
Packing into file '/mnt/h97m/backup_all_20160417'
Notifying engine
Done.
[root@engine1 iso]# service ovirt-engine start
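For completeness, the restore side of such a backup would look roughly like
this (a sketch; the log path is made up, and the database-related options
depend on whether the engine DB is provisioned fresh or already exists;
engine-backup --help lists them):

  engine-backup --mode=restore --file=/mnt/h97m/backup_all_20160417 --log=/root/restore_all_20160417.log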
Passing this on in case I need the workaround again ;)
Jack Greene
hosted-engine stuck "failed liveliness check" "detail": "up"
by Paul Groeneweg | Pazion
Tonight my server with NFS hosted-engine mount crashed.
Now all is back online, except the hosted engine. I can't ping or SSH to the
machine.
When I run hosted-engine --vm-status, I get:
..........
--== Host 2 status ==--
Status up-to-date : True
Hostname : geisha-3.pazion.nl
Host ID : 2
Engine status : {"reason": "failed liveliness check",
"health": "bad", "vm": "up", "detail": "up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : d71d7c6b
Host timestamp : 4404
............
I tried restarting all services/NFS mounts and starting the hosted engine on
other hosts, but the result is always the same: the host is up, the liveliness
check fails, and the VM is unreachable over the network/IP.
I imagine it is stuck at the console, maybe waiting for an fsck?
Is there a way to access the boot display directly?
Any help is highly appreciated!
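One way to get at the VM's display (an assumption: the hosted engine exposes
a VNC console on whichever host currently runs it, typically on port 5900) is
to set a temporary console password and connect with a VNC client:

  hosted-engine --add-console-password
  remote-viewer vnc://geisha-3.pazion.nl:5900   # host name taken from the status output above

If the VM really is waiting at an fsck prompt, it should be visible there.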
nfs storage permission problem
by Bill James
I have a cluster working fine with 2 nodes.
I'm trying to add a third and it is complaining:
StorageServerAccessPermissionError: Permission settings on the specified
path do not allow access to the storage. Verify permission settings on
the specified storage path.: 'path =
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs'
If I try the commands manually as vdsm, they work fine and the volume mounts.
[vdsm@ovirt4 test /]$ mkdir -p
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
[vdsm@ovirt4 test /]$ sudo -n /usr/bin/mount -t nfs -o
soft,nosharecache,timeo=600,retrans=6
ovirt3-ks.test.j2noc.com:/ovirt-store/nfs
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
[vdsm@ovirt4 test /]$ df -h
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
Filesystem Size Used Avail Use% Mounted on
ovirt3-ks.test.j2noc.com:/ovirt-store/nfs 1.1T 305G 759G 29%
/rhev/data-center/mnt/ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
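Since the manual mount works, the export-side permissions are still worth a
look; oVirt expects the exported directory itself to be owned by 36:36. A
hedged sketch, run on the NFS server (paths taken from the mount command
above, adjust as needed):

  ls -ld /ovirt-store/nfs   # should show vdsm:kvm or 36:36
  chown 36:36 /ovirt-store/nfs
  chmod 0755 /ovirt-store/nfs
  exportfs -v               # confirm export options (anonuid/anongid if squashing)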
After manually mounting the NFS volumes and activating the node it still
fails.
2016-04-13 14:55:16,559 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-61) [64ceea1d] Correlation ID: 64ceea1d,
Job ID: a47b74c7-2ae0-43f9-9bdf-e50963a28895, Call Stack: null, Custom
Event ID: -1, Message: Host ovirt4.test.j2noc.com cannot access the
Storage Domain(s) <UNKNOWN> attached to the Data Center Default. Setting
Host state to Non-Operational.
Not sure what "UNKNOWN" storage is, unless its one I deleted earlier
that somehow isn't really removed.
Also tried "reinstall" on node. same issue.
Attached are engine and vdsm logs.
Thanks.
serial console and permission
by Nathanaël Blanchet
Hi all,
I've successfully set up the serial console feature for all my vms.
But the only way I found to make it work is to add each user as a
UserVmManager role, whereas they have the SuperUser role at the
datacenter level.I know there is an open bug on it for this.
A second bug is that adding a group with UserVmManager as permission on
a vm (instead of a simple user) doesn't allow to get the serial console.
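For what it's worth, the per-VM workaround can also be scripted against the
REST API instead of clicked through the UI. A hedged sketch (engine URL and
the VM/user/role ids are placeholders; the UserVmManager role id can be
looked up under /ovirt-engine/api/roles):

  curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
       -X POST https://engine.example.com/ovirt-engine/api/vms/VM_ID/permissions \
       -d '<permission><role id="ROLE_ID"/><user id="USER_ID"/></permission>'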
Thank you for your help
Fwd: Re: HA agent fails to start
by Richard Neuboeck
On 04/14/2016 11:03 PM, Simone Tiraboschi wrote:
> On Thu, Apr 14, 2016 at 10:38 PM, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>> On Thu, Apr 14, 2016 at 6:53 PM, Richard Neuboeck <hawk(a)tbi.univie.ac.at> wrote:
>>> On 14.04.16 18:46, Simone Tiraboschi wrote:
>>>> On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck <hawk(a)tbi.univie.ac.at> wrote:
>>>>> On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
>>>>>> On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
>>>>>> <hawk(a)tbi.univie.ac.at> wrote:
>>>>>>> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
>>>>>>>> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck <hawk(a)tbi.univie.ac.at> wrote:
>>>>>>>>> The answers file shows the setup time of both machines.
>>>>>>>>>
>>>>>>>>> On both machines hosted-engine.conf got rotated right before I wrote
>>>>>>>>> this mail. Is it possible that I managed to interrupt the rotation with
>>>>>>>>> the reboot so the backup was accurate but the update not yet written to
>>>>>>>>> hosted-engine.conf?
>>>>>>>>
>>>>>>>> AFAIK we don't have any rotation mechanism for that file; something
>>>>>>>> else you have in place on that host?
>>>>>>>
>>>>>>> Those machines are all CentOS 7.2 minimal installs. The only
>>>>>>> adaptation I do is installing vim, removing postfix and installing
>>>>>>> exim, removing firewalld and installing iptables-service. Then I add
>>>>>>> the oVirt repos (3.6 and 3.6-snapshot) and deploy the host.
>>>>>>>
>>>>>>> But checking lsof shows that 'ovirt-ha-agent --no-daemon' has access
>>>>>>> to the config file (and the one ending with ~):
>>>>>>>
>>>>>>> # lsof | grep 'hosted-engine.conf~'
>>>>>>> ovirt-ha- 193446 vdsm 351u REG
>>>>>>> 253,0 1021 135070683
>>>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf~
>>>>>>
>>>>>> This is not that much relevant if the file was renamed after
>>>>>> ovirt-ha-agent opened it.
>>>>>> Try this:
>>>>>>
>>>>>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# tail -n1 -f
>>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf &
>>>>>> [1] 28866
>>>>>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# port=
>>>>>>
>>>>>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep hosted-engine.conf
>>>>>> tail 28866 root 3r REG
>>>>>> 253,0 1014 1595898 /etc/ovirt-hosted-engine/hosted-engine.conf
>>>>>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# mv
>>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf
>>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf_123
>>>>>> [root@c72he20160405h1 ovirt-hosted-engine-setup]# lsof | grep hosted-engine.conf
>>>>>> tail 28866 root 3r REG
>>>>>> 253,0 1014 1595898
>>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf_123
>>>>>> [root@c72he20160405h1 ovirt-hosted-engine-setup]#
>>>>>>
>>>>>
>>>>> I've issued the commands you suggested but I don't know how that
>>>>> helps to find the process accessing the config files.
>>>>>
>>>>> After moving the hosted-engine.conf file the HA agent crashed
>>>>> logging the information that the config file is not available.
>>>>>
>>>>> Here is the output from every command:
>>>>>
>>>>> # tail -n1 -f /etc/ovirt-hosted-engine/hosted-engine.conf &
>>>>> [1] 167865
>>>>> [root@cube-two ~]# port=
>>>>> # lsof | grep hosted-engine.conf
>>>>> ovirt-ha- 166609 vdsm 5u REG
>>>>> 253,0 1021 134433491
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>> ovirt-ha- 166609 vdsm 7u REG
>>>>> 253,0 1021 134433453
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>> ovirt-ha- 166609 vdsm 8u REG
>>>>> 253,0 1021 134433489
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>> ovirt-ha- 166609 vdsm 9u REG
>>>>> 253,0 1021 134433493
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf~
>>>>> ovirt-ha- 166609 vdsm 10u REG
>>>>> 253,0 1021 134433495
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf
>>>>> tail 167865 root 3r REG
>>>>> 253,0 1021 134433493
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf~
>>>>> # mv /etc/ovirt-hosted-engine/hosted-engine.conf
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf_123
>>>>> # lsof | grep hosted-engine.conf
>>>>> ovirt-ha- 166609 vdsm 5u REG
>>>>> 253,0 1021 134433491
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>> ovirt-ha- 166609 vdsm 7u REG
>>>>> 253,0 1021 134433453
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>> ovirt-ha- 166609 vdsm 8u REG
>>>>> 253,0 1021 134433489
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>> ovirt-ha- 166609 vdsm 9u REG
>>>>> 253,0 1021 134433493
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>> ovirt-ha- 166609 vdsm 10u REG
>>>>> 253,0 1021 134433495
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>> ovirt-ha- 166609 vdsm 12u REG
>>>>> 253,0 1021 134433498
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf~
>>>>> ovirt-ha- 166609 vdsm 13u REG
>>>>> 253,0 1021 134433499
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf_123
>>>>> tail 167865 root 3r REG
>>>>> 253,0 1021 134433493
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf (deleted)
>>>>>
>>>>>
>>>>>> The issue is understanding who renames that file on your host.
>>>>>
>>>>> From what I've seen so far it looks like a child of vdsm accesses
>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf periodically but is not
>>>>> responsible for the ~ file.
>>>>>
>>>>> # auditctl -w /etc/ovirt-hosted-engine/hosted-engine.conf
>>>>> and
>>>>> # auditctl -w /etc/ovirt-hosted-engine/hosted-engine.conf~
>>>>>
>>>>> auditd.log shows this:
>>>>>
>>>>> type=SYSCALL msg=audit(1460639783.613:482590): arch=c000003e
>>>>> syscall=2 success=yes exit=75 a0=7f29b400f0b0 a1=0 a2=1b6 a3=24
>>>>> items=1 ppid=1 pid=3701 auid=4294967295 uid=36 gid=36 euid=36
>>>>> suid=36 fsuid=36 egid=36 sgid=36 fsgid=36 tty=(none) ses=4294967295
>>>>> comm="jsonrpc.Executo" exe="/usr/bin/python2.7"
>>>>> subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 key=(null)
>>>>> type=CWD msg=audit(1460639783.613:482590): cwd="/"
>>>>> type=PATH msg=audit(1460639783.613:482590): item=0
>>>>> name="/etc/ovirt-hosted-engine/hosted-engine.conf" inode=134433499
>>>>> dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00
>>>>> obj=system_u:object_r:etc_t:s0 objtype=NORMAL
>>>>>
>>>>>
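As an aside, once such an audit watch is in place, ausearch can usually pull
the responsible process straight out of the audit log by file name, e.g. (a
hypothetical invocation):

  ausearch -i -f /etc/ovirt-hosted-engine/hosted-engine.conf~ | tail -n 20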
>>>>> Now that the HA agent is dead I'm removing the ~ file and starting
>>>>> the HA agent again. The ~ file immediately appears again.
>>>>>
>>>>> # rm hosted-engine.conf~
>>>>> rm: remove regular file ‘hosted-engine.conf~’? y
>>>>> [root@cube-two ovirt-hosted-engine]# ls -l
>>>>> total 6800
>>>>> -rw-r--r--. 1 root root 3252 Apr 8 10:35 answers.conf
>>>>> -rw-r--r--. 1 root root 6948582 Apr 14 14:48 ha-trace.log
>>>>> -rw-r--r--. 1 root root 1021 Apr 14 15:07 hosted-engine.conf
>>>>> -rw-r--r--. 1 root root 413 Apr 8 10:35 iptables.example
>>>>> [root@cube-two ovirt-hosted-engine]# systemctl start ovirt-ha-agent
>>>>> [root@cube-two ovirt-hosted-engine]# ls -l
>>>>> total 6804
>>>>> -rw-r--r--. 1 root root 3252 Apr 8 10:35 answers.conf
>>>>> -rw-r--r--. 1 root root 6948582 Apr 14 14:48 ha-trace.log
>>>>> -rw-r--r--. 1 root root 1021 Apr 14 15:18 hosted-engine.conf
>>>>> -rw-r--r--. 1 root root 1021 Apr 14 15:07 hosted-engine.conf~
>>>>> -rw-r--r--. 1 root root 413 Apr 8 10:35 iptables.example
>>>>>
>>>>> The auditd.log shows that ~ file is moved into place but not what
>>>>> issued the mv:
>>>>>
>>>>> type=CONFIG_CHANGE msg=audit(1460639919.277:482750): auid=4294967295
>>>>> ses=4294967295 op="updated_rules"
>>>>> path="/etc/ovirt-hosted-engine/hosted-engine.conf~" key=(null)
>>>>> list=4 res=1
>>>>> type=SYSCALL msg=audit(1460639919.277:482751): arch=c000003e
>>>>> syscall=82 success=yes exit=0 a0=7ffe4b3c0e90 a1=7ffe4b3bf920
>>>>> a2=7f68083a2778 a3=7ffe4b3bf680 items=5 ppid=170233 pid=170234
>>>>> auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0
>>>>> fsgid=0 tty=(none) ses=4294967295 comm="mv"
>>>>> exe="/usr/bin/mv" subj=system_u:system_r:unconfined_service_t:s0
>>>>> key=(null)
>>>>> type=CWD msg=audit(1460639919.277:482751): cwd="/"
>>>>> type=PATH msg=audit(1460639919.277:482751): item=0
>>>>> name="/etc/ovirt-hosted-engine/" inode=69555 dev=fd:00 mode=040755
>>>>> ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 objtype=PARENT
>>>>> type=PATH msg=audit(1460639919.277:482751): item=1
>>>>> name="/etc/ovirt-hosted-engine/" inode=69555 dev=fd:00 mode=040755
>>>>> ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 objtype=PARENT
>>>>> type=PATH msg=audit(1460639919.277:482751): item=2
>>>>> name="/etc/ovirt-hosted-engine/hosted-engine.conf" inode=134433453
>>>>> dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00
>>>>> obj=system_u:object_r:etc_t:s0 objtype=DELETE
>>>>> type=PATH msg=audit(1460639919.277:482751): item=3
>>>>> name="/etc/ovirt-hosted-engine/hosted-engine.conf~" inode=134433499
>>>>> dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00
>>>>> obj=system_u:object_r:etc_t:s0 objtype=DELETE
>>>>> type=PATH msg=audit(1460639919.277:482751): item=4
>>>>> name="/etc/ovirt-hosted-engine/hosted-engine.conf~" inode=134433453
>>>>> dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00
>>>>> obj=system_u:object_r:etc_t:s0 objtype=CREATE
>>>>>
>>>>>
>>>>>> As a thumb rule, if a file name is appended with a tilde~, it only
>>>>>> means that it is a backup created by a text editor or similar program.
>>>>>
>>>>> If anyone except myself would have access to these systems I would
>>>>> guess the same. But since I'm not editing anything in
>>>>> /etc/ovirt-hosted-engine there must be another reason. And there is.
>>>>>
>>>>> Aside from auditd I tried to strace the whole thing just to make
>>>>> sure it comes from the HA agent.
>>>>>
>>>>> [root@cube-two ~]# strace -o ha-trace.log -f
>>>>> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent --no-daemon
>>>>>
>>>>> Looking at the trace log I found this:
>>>>>
>>>>> 183409 statfs("/etc/ovirt-hosted-engine/.", {f_type=0x58465342,
>>>>> f_bsize=4096, f_blocks=13100800, f_bfree=12523576,
>>>>> f_bavail=12523576, f_files=52428800, f_ffree=52379892,
>>>>> f_fsid={64768, 0}, f_namelen=255, f_frsize=4096}) = 0
>>>>> 183409 rename("/etc/ovirt-hosted-engine/hosted-engine.conf",
>>>>> "/etc/ovirt-hosted-engine/hosted-engine.conf~") = 0
>>>>> 183409 rename("/var/lib/ovirt-hosted-engine-ha/tmpNjTElr",
>>>>> "/etc/ovirt-hosted-engine/hosted-engine.conf") = 0
>>>>> 183409 newfstatat(AT_FDCWD,
>>>>> "/etc/ovirt-hosted-engine/hosted-engine.conf",
>>>>> {st_mode=S_IFREG|0600, st_size=1021, ...}, AT_SYMLINK_NOFOLLOW) = 0
>>>>> 183409 open("/etc/ovirt-hosted-engine/hosted-engine.conf",
>>>>> O_RDONLY|O_NOFOLLOW) = 3
>>>>>
>>>>>
>>>>> Putting it all together I started reading the HA agent sources and
>>>>> found the function _wrote_updated_conf_file in
>>>>> /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/upgrade.py
>>>>> which issues a mv -b which creates the ~ file.
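GNU mv's -b flag keeps the file being replaced as a backup with a trailing ~,
which matches the rename pair in the strace output above. A minimal
stand-alone reproduction (file names made up):

  echo old > hosted-engine.conf
  echo new > tmpfile
  mv -b tmpfile hosted-engine.conf
  ls   # hosted-engine.conf now holds "new", hosted-engine.conf~ holds "old"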
>>>>
>>>> This should just trigger during the 3.5 to 3.6 upgrade, but your hosts are new.
>>>> Can you please attach /var/log/ovirt-hosted-engine-ha/agent.log from one of them?
>>>
>>> The agent.log of host cube-two is attached to this mail.
>>
>> Yes, you are right:
>> it's looping trying to fix a path in the config file (on 3.5 we didn't
>> check if an NFS path ended with a '/', while for other reasons that
>> wasn't working on 3.6, so we need to fix it), but that doesn't seem to be
>> your case, hence the strange loop.
>>
>> Now I need to understand why it enters there.
>> Can you please execute
>> tree /rhev/data-center/
>> and post me the output?
>>
>> Thanks again
>
> OK, I think it's just a side effect of this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1317699
I stumbled upon this bug and the mount problem. I thought this bug
was caused by the missing vfs_type for the engine gluster storage
volume.
> The local mount points of NFS and GlusterFS storage domains are slightly
> different.
> The hosted-engine storage domain auto-import procedure probably didn't
> recognize the gluster storage domain as a gluster one and so marked
> it as NFS in the engine, and so it got mounted twice on different paths
> (by ovirt-ha-agent at boot time and by the engine further on);
> ovirt-ha-agent notices it and thinks that's due to the old trailing
> slash issue on NFS paths, so it tries to fix it, but there is
> nothing to fix in the config file, hence the infinite loop.
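Concretely, the two mount-point conventions differ only by the glusterSD
subdirectory, so a domain imported with the wrong type ends up mounted twice.
Roughly (illustrative placeholders, not paths from this setup):

  /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<domain-uuid>   # gluster storage domain
  /rhev/data-center/mnt/<server>:_<volume>/<domain-uuid>             # same export treated as NFS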
Thanks for the explanation.
> I'll try to push a patch ASAP.
Thank you!
> Can you please still provide the output of
> tree /rhev/data-center/
Sure. It's attached to this mail (tree ran on host cube-two).
Anything else I can do to help?
>
>>>>> The question now is why is this done so frequently. Especially
>>>>> considering since there are no modifications to the file. Is this
>>>>> behavior normal?
>>>>>
>>>>> [root@cube-two ~]# diff /etc/ovirt-hosted-engine/hosted-engine.conf*
>>>>> [root@cube-two ~]#
>>>>>
>>>>>
>>>>>>>>> [root@cube-two ~]# ls -l /etc/ovirt-hosted-engine
>>>>>>>>> total 16
>>>>>>>>> -rw-r--r--. 1 root root 3252 Apr 8 10:35 answers.conf
>>>>>>>>> -rw-r--r--. 1 root root 1021 Apr 13 09:31 hosted-engine.conf
>>>>>>>>> -rw-r--r--. 1 root root 1021 Apr 13 09:30 hosted-engine.conf~
>>>>>>>>>
>>>>>>>>> [root@cube-three ~]# ls -l /etc/ovirt-hosted-engine
>>>>>>>>> total 16
>>>>>>>>> -rw-r--r--. 1 root root 3233 Apr 11 08:02 answers.conf
>>>>>>>>> -rw-r--r--. 1 root root 1002 Apr 13 09:31 hosted-engine.conf
>>>>>>>>> -rw-r--r--. 1 root root 1002 Apr 13 09:31 hosted-engine.conf~
>>>>>>>>>
>>>>>>>>> On 12.04.16 16:01, Simone Tiraboschi wrote:
>>>>>>>>>> Everything seems fine here,
>>>>>>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf seems to be correctly
>>>>>>>>>> created with the right name.
>>>>>>>>>> Can you please check the latest modification time of your
>>>>>>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf~ and compare it with the
>>>>>>>>>> setup time?
>>>>>>>>>>
>>>>>>>>>> On Tue, Apr 12, 2016 at 2:34 PM, Richard Neuboeck <hawk(a)tbi.univie.ac.at> wrote:
>>>>>>>>>>> On 04/12/2016 11:32 AM, Simone Tiraboschi wrote:
>>>>>>>>>>>> On Mon, Apr 11, 2016 at 8:11 AM, Richard Neuboeck <hawk(a)tbi.univie.ac.at> wrote:
>>>>>>>>>>>>> Hi oVirt Group,
>>>>>>>>>>>>>
>>>>>>>>>>>>> in my attempts to get all aspects of oVirt 3.6 up and running I
>>>>>>>>>>>>> stumbled upon something I'm not sure how to fix:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Initially I installed a hosted engine setup. After that I added
>>>>>>>>>>>>> another HA host (with hosted-engine --deploy). The host was
>>>>>>>>>>>>> registered in the Engine correctly and HA agent came up as expected.
>>>>>>>>>>>>>
>>>>>>>>>>>>> However if I reboot the second host (through the Engine UI or
>>>>>>>>>>>>> manually) HA agent fails to start. The reason seems to be that
>>>>>>>>>>>>> /etc/ovirt-hosted-engine/hosted-engine.conf is empty. The backup
>>>>>>>>>>>>> file ending with ~ exists though.
>>>>>>>>>>>>
>>>>>>>>>>>> Can you please attach hosted-engine-setup logs from your additional hosts?
>>>>>>>>>>>> AFAIK our code will never take a ~ ending backup of that file.
>>>>>>>>>>>
>>>>>>>>>>> ovirt-hosted-engine-setup logs from both additional hosts are
>>>>>>>>>>> attached to this mail.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> Here are the log messages from the journal:
>>>>>>>>>>>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at systemd[1]: Start=
ing oVirt
>>>>>>>>>>>>> Hosted Engine High Availability Monitoring Agent...
>>>>>>>>>>>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[37=
47]:
>>>>>>>>>>>>> INFO:ovirt_hosted_engine_ha.agent.agent.Agent:ovirt-hosted-=
engine-ha
>>>>>>>>>>>>> agent 1.3.5.3-0.0.master started
>>>>>>>>>>>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[37=
47]:
>>>>>>>>>>>>> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngin=
e:Found
>>>>>>>>>>>>> certificate common name: cube-two.tbi.univie.ac.at
>>>>>>>>>>>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[37=
47]:
>>>>>>>>>>>>> ovirt-ha-agent
>>>>>>>>>>>>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERR=
OR Hosted
>>>>>>>>>>>>> Engine is not configured. Shutting down.
>>>>>>>>>>>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[37=
47]:
>>>>>>>>>>>>> ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi=
ne:Hosted
>>>>>>>>>>>>> Engine is not configured. Shutting down.
>>>>>>>>>>>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at ovirt-ha-agent[37=
47]:
>>>>>>>>>>>>> INFO:ovirt_hosted_engine_ha.agent.agent.Agent:Agent shuttin=
g down
>>>>>>>>>>>>> Apr 11 07:29:39 cube-two.tbi.univie.ac.at systemd[1]:
>>>>>>>>>>>>> ovirt-ha-agent.service: main process exited, code=3Dexited,=
status=3D255/n/a
>>>>>>>>>>>>>
>>>>>>>>>>>>> If I restore the configuration from the backup file and manually
>>>>>>>>>>>>> restart the HA agent it's working properly.
>>>>>>>>>>>>>
>>>>>>>>>>>>> For testing purposes I added a third HA host which turned out to
>>>>>>>>>>>>> behave exactly the same.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any help would be appreciated!
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>> Cheers
>>>>>>>>>>>>> Richard
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> /dev/null
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> /dev/null
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> /dev/null
>>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> /dev/null
>>>>>
>>>
--
/dev/null
--------------070004000805050505060204
Content-Type: text/plain; charset=UTF-8;
name="data-center.txt"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
filename="data-center.txt"
L3JoZXYvZGF0YS1jZW50ZXIvCuKUnOKUgOKUgCAwMDAwMDAwMS0wMDAxLTAwMDEtMDAwMS0w
MDAwMDAwMDAzY2UK4pSCwqDCoCDilJzilIDilIAgMjczMmNkYTctNDFjMy00YjZmLTg3Y2Mt
MTRjZGRjNmMwZmU0IC0+IC9yaGV2L2RhdGEtY2VudGVyL21udC9nbHVzdGVyU0QvYm9yZy1z
cGhlcmUtb25lOl9leHBvcnQvMjczMmNkYTctNDFjMy00YjZmLTg3Y2MtMTRjZGRjNmMwZmU0
CuKUgsKgwqAg4pSc4pSA4pSAIDUxNmEyZDc3LWVkY2MtNDZjZi05ODYwLTVmYjQ0MTEyZjRk
MCAtPiAvcmhldi9kYXRhLWNlbnRlci9tbnQvdmluY3VsdW0udGJpLnVuaXZpZS5hYy5hdDpf
dmFyX2xpYl9leHBvcnRzX2lzby81MTZhMmQ3Ny1lZGNjLTQ2Y2YtOTg2MC01ZmI0NDExMmY0
ZDAK4pSCwqDCoCDilJzilIDilIAgODFiNTliMDgtZGMwMi00MTRiLThjMDEtNDNiN2RlZDE0
ZGUzIC0+IC9yaGV2L2RhdGEtY2VudGVyL21udC9nbHVzdGVyU0QvYm9yZy1zcGhlcmUtb25l
Ol9lbmdpbmUvODFiNTliMDgtZGMwMi00MTRiLThjMDEtNDNiN2RlZDE0ZGUzCuKUgsKgwqAg
4pSc4pSA4pSAIGRlNjI4MWI2LWY4NDMtNDhjYy1iN2Y2LTBhZDQ2Nzc2MGUwZCAtPiAvcmhl
di9kYXRhLWNlbnRlci9tbnQvZ2x1c3RlclNEL2Jvcmctc3BoZXJlLW9uZTpfcGxleHVzL2Rl
NjI4MWI2LWY4NDMtNDhjYy1iN2Y2LTBhZDQ2Nzc2MGUwZArilILCoMKgIOKUlOKUgOKUgCBt
YXN0ZXJzZCAtPiAvcmhldi9kYXRhLWNlbnRlci9tbnQvZ2x1c3RlclNEL2Jvcmctc3BoZXJl
LW9uZTpfcGxleHVzL2RlNjI4MWI2LWY4NDMtNDhjYy1iN2Y2LTBhZDQ2Nzc2MGUwZArilJTi
lIDilIAgbW50CiAgICDilJzilIDilIAgZ2x1c3RlclNECiAgICDilILCoMKgIOKUnOKUgOKU
gCBib3JnLXNwaGVyZS1vbmU6X2VuZ2luZQogICAg4pSCwqDCoCDilILCoMKgIOKUnOKUgOKU
gCA4MWI1OWIwOC1kYzAyLTQxNGItOGMwMS00M2I3ZGVkMTRkZTMKICAgIOKUgsKgwqAg4pSC
wqDCoCDilILCoMKgIOKUnOKUgOKUgCBkb21fbWQKICAgIOKUgsKgwqAg4pSCwqDCoCDilILC
oMKgIOKUgsKgwqAg4pSc4pSA4pSAIGlkcwogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg
4pSCwqDCoCDilJzilIDilIAgaW5ib3gKICAgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKU
gsKgwqAg4pSc4pSA4pSAIGxlYXNlcwogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSC
wqDCoCDilJzilIDilIAgbWV0YWRhdGEKICAgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKU
gsKgwqAg4pSU4pSA4pSAIG91dGJveAogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSc
4pSA4pSAIGhhX2FnZW50CiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKU
nOKUgOKUgCBob3N0ZWQtZW5naW5lLmxvY2tzcGFjZSAtPiAvdmFyL3J1bi92ZHNtL3N0b3Jh
Z2UvODFiNTliMDgtZGMwMi00MTRiLThjMDEtNDNiN2RlZDE0ZGUzLzZjMTI3ZjRmLThlYTct
NDM1Ni05OWFkLWMxMTgyZmI1NjBmNS8xZmNiNGIwYi1jOGQzLTRiZDYtYjYwNC1lZDc2NDhk
MGQ1MmUKICAgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSU4pSA4pSAIGhv
c3RlZC1lbmdpbmUubWV0YWRhdGEgLT4gL3Zhci9ydW4vdmRzbS9zdG9yYWdlLzgxYjU5YjA4
LWRjMDItNDE0Yi04YzAxLTQzYjdkZWQxNGRlMy9iMDNhN2Y1Zi1hNjRlLTQwM2QtYWFjMi01
ZjE3N2MyYTIzMDUvYzIzNjJhOTEtNDc1Yy00NjE3LWE4NDgtMzI3MjVhZjFlZWM0CiAgICDi
lILCoMKgIOKUgsKgwqAg4pSCwqDCoCDilJTilIDilIAgaW1hZ2VzCiAgICDilILCoMKgIOKU
gsKgwqAg4pSCwqDCoCAgICAg4pSc4pSA4pSAIDAxODIyMjc5LTZmYWQtNGIyYy05ZmI3LTVl
MjFkNzE1NzRmZQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSc
4pSA4pSAIDE3OWRhODJlLWFhNTUtNGQ1OS1hZWYyLTM2ZGQxNTY0ZDAyNgogICAg4pSCwqDC
oCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSc4pSA4pSAIDE3OWRhODJlLWFhNTUt
NGQ1OS1hZWYyLTM2ZGQxNTY0ZDAyNi5sZWFzZQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKg
wqAgICAgIOKUgsKgwqAg4pSU4pSA4pSAIDE3OWRhODJlLWFhNTUtNGQ1OS1hZWYyLTM2ZGQx
NTY0ZDAyNi5tZXRhCiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDCoCAgICAg4pSc4pSA4pSA
IDZjMTI3ZjRmLThlYTctNDM1Ni05OWFkLWMxMTgyZmI1NjBmNQogICAg4pSCwqDCoCDilILC
oMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSc4pSA4pSAIDFmY2I0YjBiLWM4ZDMtNGJkNi1i
NjA0LWVkNzY0OGQwZDUyZQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKg
wqAg4pSc4pSA4pSAIDFmY2I0YjBiLWM4ZDMtNGJkNi1iNjA0LWVkNzY0OGQwZDUyZS5sZWFz
ZQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSU4pSA4pSAIDFm
Y2I0YjBiLWM4ZDMtNGJkNi1iNjA0LWVkNzY0OGQwZDUyZS5tZXRhCiAgICDilILCoMKgIOKU
gsKgwqAg4pSCwqDCoCAgICAg4pSc4pSA4pSAIDhhNDllMGZkLTlkNDYtNDZjZS1hMDQwLWFh
ZGIyNWMwYTEyOQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSc
4pSA4pSAIDQ0MDA4YWYwLTUzYWQtNDA1Zi04YmUzLWMzN2MzYTQ0ZGMyOAogICAg4pSCwqDC
oCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSc4pSA4pSAIDQ0MDA4YWYwLTUzYWQt
NDA1Zi04YmUzLWMzN2MzYTQ0ZGMyOC5sZWFzZQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKg
wqAgICAgIOKUgsKgwqAg4pSU4pSA4pSAIDQ0MDA4YWYwLTUzYWQtNDA1Zi04YmUzLWMzN2Mz
YTQ0ZGMyOC5tZXRhCiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDCoCAgICAg4pSc4pSA4pSA
IGE0ODAwNjExLWViNmEtNDZhYi1iNTE4LTNkNzk0MGEwMjlkNwogICAg4pSCwqDCoCDilILC
oMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSc4pSA4pSAIDA0ZTEwYTg0LWU5ZjgtNDc5NC04
NTQ4LTlhMjQ0NjFjY2IzZgogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKg
wqAg4pSc4pSA4pSAIDA0ZTEwYTg0LWU5ZjgtNDc5NC04NTQ4LTlhMjQ0NjFjY2IzZi5sZWFz
ZQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSU4pSA4pSAIDA0
ZTEwYTg0LWU5ZjgtNDc5NC04NTQ4LTlhMjQ0NjFjY2IzZi5tZXRhCiAgICDilILCoMKgIOKU
gsKgwqAg4pSCwqDCoCAgICAg4pSc4pSA4pSAIGIwM2E3ZjVmLWE2NGUtNDAzZC1hYWMyLTVm
MTc3YzJhMjMwNQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSc
4pSA4pSAIGMyMzYyYTkxLTQ3NWMtNDYxNy1hODQ4LTMyNzI1YWYxZWVjNAogICAg4pSCwqDC
oCDilILCoMKgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSc4pSA4pSAIGMyMzYyYTkxLTQ3NWMt
NDYxNy1hODQ4LTMyNzI1YWYxZWVjNC5sZWFzZQogICAg4pSCwqDCoCDilILCoMKgIOKUgsKg
wqAgICAgIOKUgsKgwqAg4pSU4pSA4pSAIGMyMzYyYTkxLTQ3NWMtNDYxNy1hODQ4LTMyNzI1
YWYxZWVjNC5tZXRhCiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDCoCAgICAg4pSU4pSA4pSA
IGRkZWRiZDg5LTkzZTMtNGU3NS05ZTk0LTMyZWFkZDZmOTIyMQogICAg4pSCwqDCoCDilILC
oMKgIOKUgsKgwqAgICAgICAgICDilJzilIDilIAgZWVjYWNkNTAtYzE5Zi00MzVhLWJiZDMt
NTk3MmI5MjMyZmJhCiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDCoCAgICAgICAgIOKUnOKU
gOKUgCBlZWNhY2Q1MC1jMTlmLTQzNWEtYmJkMy01OTcyYjkyMzJmYmEubGVhc2UKICAgIOKU
gsKgwqAg4pSCwqDCoCDilILCoMKgICAgICAgICAg4pSU4pSA4pSAIGVlY2FjZDUwLWMxOWYt
NDM1YS1iYmQzLTU5NzJiOTIzMmZiYS5tZXRhCiAgICDilILCoMKgIOKUgsKgwqAg4pSU4pSA
4pSAIF9fRElSRUNUX0lPX1RFU1RfXwogICAg4pSCwqDCoCDilJzilIDilIAgYm9yZy1zcGhl
cmUtb25lOl9leHBvcnQKICAgIOKUgsKgwqAg4pSCwqDCoCDilJzilIDilIAgMjczMmNkYTct
NDFjMy00YjZmLTg3Y2MtMTRjZGRjNmMwZmU0CiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDC
oCDilJzilIDilIAgZG9tX21kCiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKg
IOKUnOKUgOKUgCBpZHMKICAgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSc
4pSA4pSAIGluYm94CiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKUnOKU
gOKUgCBsZWFzZXMKICAgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSc4pSA
4pSAIG1ldGFkYXRhCiAgICDilILCoMKgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKUlOKU
gOKUgCBvdXRib3gKICAgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKUnOKUgOKUgCBpbWFn
ZXMKICAgIOKUgsKgwqAg4pSCwqDCoCDilILCoMKgIOKUlOKUgOKUgCBtYXN0ZXIKICAgIOKU
gsKgwqAg4pSCwqDCoCDilILCoMKgICAgICDilJzilIDilIAgdGFza3MKICAgIOKUgsKgwqAg
4pSCwqDCoCDilILCoMKgICAgICDilJTilIDilIAgdm1zCiAgICDilILCoMKgIOKUgsKgwqAg
4pSU4pSA4pSAIF9fRElSRUNUX0lPX1RFU1RfXwogICAg4pSCwqDCoCDilJTilIDilIAgYm9y
Zy1zcGhlcmUtb25lOl9wbGV4dXMKICAgIOKUgsKgwqAgICAgIOKUnOKUgOKUgCBkZTYyODFi
Ni1mODQzLTQ4Y2MtYjdmNi0wYWQ0Njc3NjBlMGQKICAgIOKUgsKgwqAgICAgIOKUgsKgwqAg
4pSc4pSA4pSAIGRvbV9tZAogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKUnOKU
gOKUgCBpZHMKICAgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSCwqDCoCDilJzilIDilIAgaW5i
b3gKICAgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSCwqDCoCDilJzilIDilIAgbGVhc2VzCiAg
ICDilILCoMKgICAgICDilILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIG1ldGFkYXRhCiAgICDi
lILCoMKgICAgICDilILCoMKgIOKUgsKgwqAg4pSU4pSA4pSAIG91dGJveAogICAg4pSCwqDC
oCAgICAg4pSCwqDCoCDilJzilIDilIAgaW1hZ2VzCiAgICDilILCoMKgICAgICDilILCoMKg
IOKUgsKgwqAg4pSc4pSA4pSAIDQwODM4ODRlLTUxN2UtNDQ5ZC04ZjVjLTY5ZjIxYjQ0ZjVm
NQogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIGUy
ZWFkODI3LTIzMWItNGFkZi1hMTIyLTkyMWVlM2ZlYmJjOQogICAg4pSCwqDCoCAgICAg4pSC
wqDCoCDilILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIGUyZWFkODI3LTIzMWItNGFkZi1hMTIy
LTkyMWVlM2ZlYmJjOS5sZWFzZQogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKU
gsKgwqAg4pSU4pSA4pSAIGUyZWFkODI3LTIzMWItNGFkZi1hMTIyLTkyMWVlM2ZlYmJjOS5t
ZXRhCiAgICDilILCoMKgICAgICDilILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIDkzN2UzNTBm
LWI4YjktNGExMi1hYjJiLWM3ZDFjOTM4NjQ2OAogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDi
lILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIGExZjFkZmMzLWE1OWUtNGIzMS1iYjg2LTY2ODJl
NGM4ZTIxNwogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSc4pSA
4pSAIGExZjFkZmMzLWE1OWUtNGIzMS1iYjg2LTY2ODJlNGM4ZTIxNy5sZWFzZQogICAg4pSC
wqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSU4pSA4pSAIGExZjFkZmMzLWE1
OWUtNGIzMS1iYjg2LTY2ODJlNGM4ZTIxNy5tZXRhCiAgICDilILCoMKgICAgICDilILCoMKg
IOKUgsKgwqAg4pSc4pSA4pSAIGE0NjQ2ZmM0LWQ3MWQtNGJjZS05MmE2LTk3NzQ3Njk0Yjlm
YQogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIDVm
N2ZhOTVkLWJjY2ItNGNlNy04NmIxLWQ3YmI4ZjhlYmJkZAogICAg4pSCwqDCoCAgICAg4pSC
wqDCoCDilILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIDVmN2ZhOTVkLWJjY2ItNGNlNy04NmIx
LWQ3YmI4ZjhlYmJkZC5sZWFzZQogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKU
gsKgwqAg4pSU4pSA4pSAIDVmN2ZhOTVkLWJjY2ItNGNlNy04NmIxLWQ3YmI4ZjhlYmJkZC5t
ZXRhCiAgICDilILCoMKgICAgICDilILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIGJkOTZlZDE1
LTQyZDEtNGY5MS04MmNjLTAwOGU2NzFhNDVmMwogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDi
lILCoMKgIOKUgsKgwqAg4pSc4pSA4pSAIDM1ZGQ4ZmEzLTM2NzgtNDI1YS1hYzJiLTEyNDA5
ZjExYTUwZQogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSc4pSA
4pSAIDM1ZGQ4ZmEzLTM2NzgtNDI1YS1hYzJiLTEyNDA5ZjExYTUwZS5sZWFzZQogICAg4pSC
wqDCoCAgICAg4pSCwqDCoCDilILCoMKgIOKUgsKgwqAg4pSU4pSA4pSAIDM1ZGQ4ZmEzLTM2
NzgtNDI1YS1hYzJiLTEyNDA5ZjExYTUwZS5tZXRhCiAgICDilILCoMKgICAgICDilILCoMKg
IOKUgsKgwqAg4pSU4pSA4pSAIGNjMTRiYTA1LTVhZTQtNGM2Zi04YzhhLTQyYWQ4ODQxMDYz
NgogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgICAgICDilJzilIDilIAgNjE4ZjQx
YjEtZTI2ZS00NTNlLThiYmQtNTBjZjZlZTM3NWQwCiAgICDilILCoMKgICAgICDilILCoMKg
IOKUgsKgwqAgICAgIOKUnOKUgOKUgCA2MThmNDFiMS1lMjZlLTQ1M2UtOGJiZC01MGNmNmVl
Mzc1ZDAubGVhc2UKICAgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSCwqDCoCAgICAg4pSc4pSA
4pSAIDYxOGY0MWIxLWUyNmUtNDUzZS04YmJkLTUwY2Y2ZWUzNzVkMC5tZXRhCiAgICDilILC
oMKgICAgICDilILCoMKgIOKUgsKgwqAgICAgIOKUnOKUgOKUgCA5YjZkZjNkNC02OTgzLTQ0
YzEtYmRhNi1lNmQyZDY5MTU3YzgKICAgIOKUgsKgwqAgICAgIOKUgsKgwqAg4pSCwqDCoCAg
ICAg4pSc4pSA4pSAIDliNmRmM2Q0LTY5ODMtNDRjMS1iZGE2LWU2ZDJkNjkxNTdjOC5sZWFz
ZQogICAg4pSCwqDCoCAgICAg4pSCwqDCoCDilILCoMKgICAgICDilJTilIDilIAgOWI2ZGYz
ZDQtNjk4My00NGMxLWJkYTYtZTZkMmQ2OTE1N2M4Lm1ldGEKICAgIOKUgsKgwqAgICAgIOKU
gsKgwqAg4pSU4pSA4pSAIG1hc3RlcgogICAg4pSCwqDCoCAgICAg4pSCwqDCoCAgICAg4pSc
4pSA4pSAIHRhc2tzCiAgICDilILCoMKgICAgICDilILCoMKgICAgICDilJTilIDilIAgdm1z
CiAgICDilILCoMKgICAgICDilJTilIDilIAgX19ESVJFQ1RfSU9fVEVTVF9fCiAgICDilJzi
lIDilIAgX3Zhcl9saWJfb3ZpcnQtaG9zdGVkLWVuZ2luZS1zZXR1cF90bXBJQ2NtRFcKICAg
IOKUlOKUgOKUgCB2aW5jdWx1bS50YmkudW5pdmllLmFjLmF0Ol92YXJfbGliX2V4cG9ydHNf
aXNvCiAgICAgICAg4pSc4pSA4pSAIDUxNmEyZDc3LWVkY2MtNDZjZi05ODYwLTVmYjQ0MTEy
ZjRkMAogICAgICAgIOKUgsKgwqAg4pSc4pSA4pSAIGRvbV9tZAogICAgICAgIOKUgsKgwqAg
4pSCwqDCoCDilJzilIDilIAgaWRzCiAgICAgICAg4pSCwqDCoCDilILCoMKgIOKUnOKUgOKU
gCBpbmJveAogICAgICAgIOKUgsKgwqAg4pSCwqDCoCDilJzilIDilIAgbGVhc2VzCiAgICAg
ICAg4pSCwqDCoCDilILCoMKgIOKUnOKUgOKUgCBtZXRhZGF0YQogICAgICAgIOKUgsKgwqAg
4pSCwqDCoCDilJTilIDilIAgb3V0Ym94CiAgICAgICAg4pSCwqDCoCDilJTilIDilIAgaW1h
Z2VzCiAgICAgICAg4pSCwqDCoCAgICAg4pSU4pSA4pSAIDExMTExMTExLTExMTEtMTExMS0x
MTExLTExMTExMTExMTExMQogICAgICAgIOKUgsKgwqAgICAgICAgICDilJzilIDilIAgQ2Vu
dE9TLTcteDg2XzY0LU5ldEluc3RhbGwtMTUxMS5pc28KICAgICAgICDilILCoMKgICAgICAg
ICAg4pSU4pSA4pSAIEZlZG9yYS1Xb3Jrc3RhdGlvbi1MaXZlLXg4Nl82NC0yNF9BbHBoYS03
LmlzbwogICAgICAgIOKUlOKUgOKUgCBfX0RJUkVDVF9JT19URVNUX18KCjQ0IGRpcmVjdG9y
aWVzLCA2NCBmaWxlcwo=
--------------070004000805050505060204--