[Users] [oVirt Test Day 3] ovirt-guest-agent
by ybronhei
I tried ovirt-guest-agent on Fedora 19 and 20:
- installed a vm with f19
- yum install ovirt-guest-agent
- started the service
- the information about machine name, IP address, OS, RAM and logged-in user
all appeared after installation
- the notification for power off also appears
same for f20
looks ok on both
--
Yaniv Bronhaim.
10 years, 8 months
[Users] [oVirt Test Day 3] All-in-one
by Piotr Kliczewski
Hello,
I tested all-in-one and here is what I did.
I installed a fresh F19 on a VM running on my F20 box (nested virt). During
the installation of the ovirt-engine-setup-plugin-allinone rpm I noticed
that the yum verification step for vdsm ran forever, but the second attempt
succeeded. During engine setup the ISO domain failed to create [1]; a second
attempt solved it.
I noticed that for a local cluster a user is able only to create data
domains, and only to import ISO and export domains. During creation of the
data domain I got a permissions failure which disappeared on the second
run (performed twice with the same result).
After a successful VM start I was not able to open the SPICE console due to:
"Unable to connect to the graphic server". When I tried with
virt-manager I got: "viewer connection to hypervisor host got refused
or disconnected". Danken noticed the same issue on CentOS. He found
that it worked after disabling the iptables service, and also after
starting it again.
Thanks,
Piotr
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1073456
10 years, 8 months
[Users] Increase core or socker for vCPU
by Tejesh M
Hi,
I have a basic question: when increasing vCPUs for a VM in RHEV-M, do we need
to increase cores or sockets? For instance, if someone asks for 2 vCPUs,
should I make it 2 sockets x 1 core or 1 socket x 2 cores?
I do understand that the choice also depends on the application, but I'm
asking in general.
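In general, either topology gives the guest the same number of vCPUs; a toy sanity check (not oVirt API code, just arithmetic) illustrating that sockets x cores (x threads) is all the guest counts:

```python
# Toy helper: the vCPU count a guest sees is
# sockets * cores_per_socket * threads_per_core, so 2x1 and 1x2 both
# present 2 vCPUs. The difference is only the topology shown to the
# guest OS and, for some software, per-socket licensing.
def total_vcpus(sockets, cores_per_socket, threads_per_core=1):
    return sockets * cores_per_socket * threads_per_core

print(total_vcpus(2, 1))  # 2 sockets x 1 core  -> 2
print(total_vcpus(1, 2))  # 1 socket  x 2 cores -> 2
```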
Sorry if it's not relevant to this forum.
Thanks
10 years, 8 months
[Users] [oVirt Test Day 3] Reports installation
by Martin Perina
Hi,
I tried to install oVirt Reports during test day 3 with these results:
1) Centos 6.5
- the epel and jpackage6 repositories need to be enabled
- installation and setup worked fine (everything left at defaults)
- after creating a DC and cluster and adding a host to the engine, reports
were able to show data, so it looks OK
2) Fedora 19
- installation and setup worked fine (everything left at defaults)
- after creating a DC and cluster and adding a host to the engine, reports
were able to show data, so it looks OK
Martin Perina
10 years, 8 months
[Users] Get involved in oVirt integration!
by Sandro Bonazzola
Hi,
have you got some free time and do you want to get involved in oVirt integration?
Here are a couple of bugs you can hopefully fix in less than one day:
Bug 1059950 - If cluster=Default does not exist in hosted-engine it will fail and timeout
Bug 1072664 - engine-setup.manual create database fail.
Is this the first time you try to contribute to the oVirt project?
You can start from here [1][2]!
[1] http://www.ovirt.org/Develop
[2] http://www.ovirt.org/Working_with_oVirt_Gerrit
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 8 months
[Users] Clone a Template using API
by Tejesh M
Hi,
While creating a VM from a template via the Web Admin portal, the default
provisioning is Clone, but while doing the same via the API it does thin
provisioning. Can someone share their view?
org.ovirt.engine.sdk.entities.Template templateData = new Template();
templateData.setName(templateName);
org.ovirt.engine.sdk.entities.Cluster clusterData = new Cluster();
clusterData.setName(clusterName);
org.ovirt.engine.sdk.entities.VM vmDataForCreate = new VM();
vmDataForCreate.setName(vmName);
vmDataForCreate.setCluster(clusterData);
vmDataForCreate.setTemplate(templateData);
api.getVMs().add(vmDataForCreate);
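For reference: in the REST API (which the Java SDK wraps), clone provisioning is requested by including a disks element with clone set to true in the vm you POST; thin provisioning is the default when it is omitted. A minimal sketch of that payload, using Python's stdlib purely for illustration (element names follow the 3.x API schema, and "myvm", "Default" and "mytemplate" are placeholders; verify against your engine's /api?schema):

```python
# Sketch of the XML payload for creating a VM from a template via the
# oVirt REST API. The <disks><clone>true</clone></disks> element is the
# assumed switch for clone provisioning; check it against your engine's
# API schema before relying on it.
import xml.etree.ElementTree as ET

def build_vm_from_template_xml(vm_name, cluster_name, template_name,
                               clone=True):
    vm = ET.Element("vm")
    ET.SubElement(vm, "name").text = vm_name
    cluster = ET.SubElement(vm, "cluster")
    ET.SubElement(cluster, "name").text = cluster_name
    template = ET.SubElement(vm, "template")
    ET.SubElement(template, "name").text = template_name
    if clone:
        # Omitting this element should leave the default (thin) behavior.
        disks = ET.SubElement(vm, "disks")
        ET.SubElement(disks, "clone").text = "true"
    return ET.tostring(vm, encoding="unicode")

print(build_vm_from_template_xml("myvm", "Default", "mytemplate"))
```

In the Java SDK this would correspond to building a Disks entity with clone=true and setting it on the VM before add(), if your SDK version exposes that entity.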
--
Thanks & Regards
Tejesh
10 years, 8 months
[Users] [ANN] oVirt 3.4.0 RC Test Day - Today Mar 6th
by Sandro Bonazzola
Hi all,
today Mar 6th we have oVirt 3.4.0 RC test day.
On this day all relevant engineers will be online ready to support
any issues you find during install / operating this new release.
Just make sure you have one host or more to test drive the new release.
If you're curious to see how it works, this is your chance.
Thanks again to everyone who will join us today!
Location
#ovirt irc channel
Please communicate here to allow others to see any issues
What
In this test day you have a license to kill ;)
Follow the documentation to setup your environment, and test drive the new features.
Please remember we expect to see some issues, and anything you come up with
will save you trouble when you install the final release.
Remember to try the daily tasks you'd usually do in the engine, to verify
there are no regressions.
Write down the configuration you used (HW, console, etc) in the report etherpad[1].
Documentation
Release notes: http://www.ovirt.org/OVirt_3.4.0_release_notes
Features pages links: http://bit.ly/17qBn6F
If you find errors in the wiki, please note them as well in the report etherpad [1]
Prerequisites / recommendations
Use CentOS or RHEL 6.5 only. 6.4 is unsupported due to various issues (sanlock, libvirt, etc).
Use Fedora 19 only. Fedora 20 is unsupported for running the engine due to various issues (jboss / wildfly) but may be used as hypervisor.
Latest RPMs
The repositories to be enabled for testing the release are listed in the release notes page [2].
Note: on Fedora you'll need to enable the fedora-virt-preview repository to get the latest libvirt rpms.
Note: on EL6 you'll need to enable the EPEL repository.
NEW issues / reports
For any new issue, please update the reports etherpad [1]
Feature owners, please make sure:
your feature is updated and referenced on the release page [2].
you have testing instructions for your feature either on the test day page [3] or on your feature page.
your team's regression testing section is organized and up to date on the test day page [3].
[1] http://etherpad.ovirt.org/p/3.4-testday-3
[2] http://www.ovirt.org/OVirt_3.4.0_release_notes
[3] http://www.ovirt.org/OVirt_3.4_Test_Day
Thanks.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 8 months
[Users] Data Center Non Responsive / Contending
by Giorgio Bersano
Hi everyone,
I'm asking for help again, as while testing my setup I put myself into a
situation I can't get out of.
Layout: two hosts, an iSCSI storage, the engine installed as a
"regular" KVM guest on another host (external to the oVirt setup). All
CentOS 6.5, oVirt 3.4.0beta3.
One DC ("Default"), one Cluster ("Default"), default storage type iSCSI.
Moreover, ISO domain is another "external" KVM guest exposing an NFS
share; Export Domain is in fact a VM in this Cluster.
This is a preproduction setup.
Yesterday all was running fine until I needed to do some hardware maintenance.
So I decided to put the two hosts into maintenance from the webadmin,
then shut them down in the usual way. The engine was left operational.
Later I booted one host again, waited some time, then tried to activate
it (from the webadmin), without success. Even switching on the second
host didn't change anything.
Now the Data Center status toggles between "Contending" and "Non Responsive".
None of the hosts is SPM and if I choose "Select as SPM" in the
webadmin the result is this popup:
----------
Operation Canceled
----------
Error while executing action: Cannot force select SPM: Storage Domain
cannot be accessed.
-Please check that at least one Host is operational and Data Center state is up.
----------
The two hosts are operational but the DC isn't up (downpointing red
arrow - "Non Responsive", hourglass - "Contending").
Storage domains are in "Unknown" state and if I try to "Activate" the
Master Data Domain its status becomes "Locked" and then it fails with
these events:
. Invalid status on Data Center Default. Setting status to Non Responsive
. Failed to activate Storage Domain dt02clu6070 (Data Center Default) by admin
Connectivity seems OK. iSCSI connectivity seems OK.
I'm almost certain that if I had left one host active I would have had
zero problems.
But I also think shutting down the full system should not be a problem.
In the end I restarted the ovirt-engine service without results (as expected).
Any thoughts? Logs needed?
TIA,
Giorgio.
10 years, 8 months
[Users] SD Disk's Logical Volume not visible/activated on some nodes
by Boyan Tabakov
Hello,
I have ovirt 3.3 installed on FC 19 hosts with vdsm 4.13.3-2.fc19.
One of the hosts (host1) is engine + node + SPM and the other host2 is
just a node. I have an iSCSI storage domain configured and accessible
from both nodes.
When creating a new disk in the SD, the underlying logical volume gets
properly created (seen in vgdisplay output on host1), but doesn't seem
to be automatically picked up by host2. Consequently, when creating/booting
a VM with the said disk attached, the VM fails to start on host2,
because host2 can't see the LV. Similarly, if the VM is started on
host1, it fails to migrate to host2. Extract from host2 log is in the
end. The LV in question is 6b35673e-7062-4716-a6c8-d5bf72fe3280.
As far as I could quickly tell from the vdsm code, there are only calls to lvs
and not to lvscan or lvchange, so host2's LVM doesn't fully refresh.
The only workaround so far has been to restart VDSM on host2, which
makes it refresh all LVM data properly.
When is host2 supposed to pick up any newly created LVs in the SD VG?
Any suggestions where the problem might be?
Thanks!
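For anyone digging into this, vdsm lists LVs with `lvs --noheadings --separator | -o uuid,name,vg_name,...` (the exact command is visible in the log below). A minimal sketch of parsing that output to check whether a given LV is visible on a host; the field order follows the -o list in the log, and the sample line is fabricated for illustration:

```python
# Parse the pipe-separated `lvs` output that vdsm requests
# (--noheadings --separator | -o
#  uuid,name,vg_name,attr,size,seg_start_pe,devices,tags).
FIELDS = ["uuid", "name", "vg_name", "attr", "size",
          "seg_start_pe", "devices", "tags"]

def parse_lvs(output):
    lvs = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        # One record per line; fields joined by the literal '|' separator.
        lvs.append(dict(zip(FIELDS, line.split("|"))))
    return lvs

# Fabricated sample line reusing the LV/VG UUIDs from the log below.
sample = ("  abc123|6b35673e-7062-4716-a6c8-d5bf72fe3280|"
          "3307f6fa-dd58-43db-ab23-b1fb299006c7|-wi-a----|1073741824|0|"
          "/dev/mapper/36090a098103b1821532d057a5a0120d4(0)|IU_xyz\n")
lvs = parse_lvs(sample)
visible = any(lv["name"] == "6b35673e-7062-4716-a6c8-d5bf72fe3280"
              for lv in lvs)
print(visible)  # True on host1; on host2 the LV is missing until a refresh
```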
Thread-998::DEBUG::2014-02-18
14:49:15,399::BindingXMLRPC::965::vds::(wrapper) client
[10.10.0.10]::call vmCreate with ({'acpiEnable': 'true',
'emulatedMachine': 'pc-1.0', 'tabletEnable': 'true', 'vmId':
'4669e4ad-7b76-4531-a16b-2b85345593a3', 'memGuaranteedSize': 1024,
'spiceSslCipherSuite': 'DEFAULT', 'timeOffset': '0', 'cpuType':
'Penryn', 'custom': {}, 'smp': '1', 'vmType': 'kvm', 'memSize': 1024,
'smpCoresPerSocket': '1', 'vmName': 'testvm', 'nice': '0',
'smartcardEnable': 'false', 'keyboardLayout': 'en-us', 'kvmEnable':
'true', 'pitReinjection': 'false', 'transparentHugePages': 'true',
'devices': [{'device': 'cirrus', 'specParams': {'vram': '32768',
'heads': '1'}, 'type': 'video', 'deviceId':
'9b769f44-db37-4e42-a343-408222e1422f'}, {'index': '2', 'iface': 'ide',
'specParams': {'path': ''}, 'readonly': 'true', 'deviceId':
'a61b291c-94de-4fd4-922e-1d51f4d2760d', 'path': '', 'device': 'cdrom',
'shared': 'false', 'type': 'disk'}, {'index': 0, 'iface': 'virtio',
'format': 'raw', 'bootOrder': '1', 'volumeID':
'6b35673e-7062-4716-a6c8-d5bf72fe3280', 'imageID':
'3738d400-a62e-4ded-b97f-1b4028e5f45b', 'specParams': {}, 'readonly':
'false', 'domainID': '3307f6fa-dd58-43db-ab23-b1fb299006c7', 'optional':
'false', 'deviceId': '3738d400-a62e-4ded-b97f-1b4028e5f45b', 'poolID':
'61f15cc0-8bba-482d-8a81-cd636a581b58', 'device': 'disk', 'shared':
'false', 'propagateErrors': 'off', 'type': 'disk'}, {'device':
'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon',
'deviceId': '4e05c1d1-2ac3-4885-8c1d-92ccd1388f0d'}, {'device': 'scsi',
'specParams': {}, 'model': 'virtio-scsi', 'type': 'controller',
'deviceId': 'fa3c223a-0cfc-4adf-90bc-cb6073a4b212'}],
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'display': 'vnc'},) {} flowID [20468793]
Thread-998::INFO::2014-02-18
14:49:15,406::API::642::vds::(_getNetworkIp) network None: using 0
Thread-998::INFO::2014-02-18
14:49:15,406::clientIF::394::vds::(createVm) vmContainerLock acquired by
vm 4669e4ad-7b76-4531-a16b-2b85345593a3
Thread-999::DEBUG::2014-02-18
14:49:15,409::vm::2091::vm.Vm::(_startUnderlyingVm)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::Start
Thread-998::DEBUG::2014-02-18
14:49:15,409::clientIF::407::vds::(createVm) Total desktops after
creation of 4669e4ad-7b76-4531-a16b-2b85345593a3 is 1
Thread-999::DEBUG::2014-02-18
14:49:15,409::vm::2095::vm.Vm::(_startUnderlyingVm)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::_ongoingCreations acquired
Thread-998::DEBUG::2014-02-18
14:49:15,410::BindingXMLRPC::972::vds::(wrapper) return vmCreate with
{'status': {'message': 'Done', 'code': 0}, 'vmList': {'status':
'WaitForLaunch', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0',
'tabletEnable': 'true', 'pid': '0', 'memGuaranteedSize': 1024,
'timeOffset': '0', 'keyboardLayout': 'en-us', 'displayPort': '-1',
'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType':
'Penryn', 'smp': '1', 'clientIp': '', 'nicModel': 'rtl8139,pv',
'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection':
'false', 'vmId': '4669e4ad-7b76-4531-a16b-2b85345593a3',
'transparentHugePages': 'true', 'devices': [{'device': 'cirrus',
'specParams': {'vram': '32768', 'heads': '1'}, 'type': 'video',
'deviceId': '9b769f44-db37-4e42-a343-408222e1422f'}, {'index': '2',
'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true',
'deviceId': 'a61b291c-94de-4fd4-922e-1d51f4d2760d', 'path': '',
'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': 0,
'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'volumeID':
'6b35673e-7062-4716-a6c8-d5bf72fe3280', 'imageID':
'3738d400-a62e-4ded-b97f-1b4028e5f45b', 'specParams': {}, 'readonly':
'false', 'domainID': '3307f6fa-dd58-43db-ab23-b1fb299006c7', 'optional':
'false', 'deviceId': '3738d400-a62e-4ded-b97f-1b4028e5f45b', 'poolID':
'61f15cc0-8bba-482d-8a81-cd636a581b58', 'device': 'disk', 'shared':
'false', 'propagateErrors': 'off', 'type': 'disk'}, {'device':
'memballoon', 'specParams': {'model': 'virtio'}, 'type': 'balloon',
'deviceId': '4e05c1d1-2ac3-4885-8c1d-92ccd1388f0d'}, {'device': 'scsi',
'specParams': {}, 'model': 'virtio-scsi', 'type': 'controller',
'deviceId': 'fa3c223a-0cfc-4adf-90bc-cb6073a4b212'}], 'custom': {},
'vmType': 'kvm', 'memSize': 1024, 'displayIp': '0',
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'smpCoresPerSocket': '1', 'vmName': 'testvm', 'display': 'vnc', 'nice':
'0'}}
Thread-999::INFO::2014-02-18 14:49:15,410::vm::2926::vm.Vm::(_run)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::VM wrapper has started
Thread-999::DEBUG::2014-02-18
14:49:15,412::task::579::TaskManager.Task::(_updateState)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::moving from state init ->
state preparing
Thread-999::INFO::2014-02-18
14:49:15,413::logUtils::44::dispatcher::(wrapper) Run and protect:
getVolumeSize(sdUUID='3307f6fa-dd58-43db-ab23-b1fb299006c7',
spUUID='61f15cc0-8bba-482d-8a81-cd636a581b58',
imgUUID='3738d400-a62e-4ded-b97f-1b4028e5f45b',
volUUID='6b35673e-7062-4716-a6c8-d5bf72fe3280', options=None)
Thread-999::DEBUG::2014-02-18
14:49:15,413::lvm::440::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' got the operation mutex
Thread-999::DEBUG::2014-02-18
14:49:15,413::lvm::309::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
/sbin/lvm lvs --config " devices { preferred_names =
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
\'a|/dev/mapper/36090a098103b1821532d057a5a0120d4|/dev/mapper/36090a09810bb99fefb2da57b94332027|\',
\'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
--noheadings --units b --nosuffix --separator | -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags
3307f6fa-dd58-43db-ab23-b1fb299006c7' (cwd None)
Thread-999::DEBUG::2014-02-18
14:49:15,466::lvm::309::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '';
<rc> = 0
Thread-999::DEBUG::2014-02-18
14:49:15,481::lvm::475::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-999::DEBUG::2014-02-18
14:49:15,481::lvm::475::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' released the operation mutex
Thread-999::WARNING::2014-02-18
14:49:15,482::lvm::621::Storage.LVM::(getLv) lv:
6b35673e-7062-4716-a6c8-d5bf72fe3280 not found in lvs vg:
3307f6fa-dd58-43db-ab23-b1fb299006c7 response
Thread-999::ERROR::2014-02-18
14:49:15,482::task::850::TaskManager.Task::(_setError)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3059, in getVolumeSize
    apparentsize = str(volClass.getVSize(dom, imgUUID, volUUID, bs=1))
  File "/usr/share/vdsm/storage/blockVolume.py", line 111, in getVSize
    size = int(int(lvm.getLV(sdobj.sdUUID, volUUID).size) / bs)
  File "/usr/share/vdsm/storage/lvm.py", line 914, in getLV
    raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist:
('3307f6fa-dd58-43db-ab23-b1fb299006c7/6b35673e-7062-4716-a6c8-d5bf72fe3280',)
Thread-999::DEBUG::2014-02-18
14:49:15,485::task::869::TaskManager.Task::(_run)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::Task._run:
07adf25e-b9fe-44d0-adf4-9159ac0f1f4d
('3307f6fa-dd58-43db-ab23-b1fb299006c7',
'61f15cc0-8bba-482d-8a81-cd636a581b58',
'3738d400-a62e-4ded-b97f-1b4028e5f45b',
'6b35673e-7062-4716-a6c8-d5bf72fe3280') {} failed - stopping task
Thread-999::DEBUG::2014-02-18
14:49:15,485::task::1194::TaskManager.Task::(stop)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::stopping in state preparing
(force False)
Thread-999::DEBUG::2014-02-18
14:49:15,486::task::974::TaskManager.Task::(_decref)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::ref 1 aborting True
Thread-999::INFO::2014-02-18
14:49:15,486::task::1151::TaskManager.Task::(prepare)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::aborting: Task is aborted:
'Logical volume does not exist' - code 610
Thread-999::DEBUG::2014-02-18
14:49:15,486::task::1156::TaskManager.Task::(prepare)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::Prepare: aborted: Logical
volume does not exist
Thread-999::DEBUG::2014-02-18
14:49:15,486::task::974::TaskManager.Task::(_decref)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::ref 0 aborting True
Thread-999::DEBUG::2014-02-18
14:49:15,486::task::909::TaskManager.Task::(_doAbort)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::Task._doAbort: force False
Thread-999::DEBUG::2014-02-18
14:49:15,487::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-999::DEBUG::2014-02-18
14:49:15,487::task::579::TaskManager.Task::(_updateState)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::moving from state preparing
-> state aborting
Thread-999::DEBUG::2014-02-18
14:49:15,487::task::534::TaskManager.Task::(__state_aborting)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::_aborting: recover policy none
Thread-999::DEBUG::2014-02-18
14:49:15,487::task::579::TaskManager.Task::(_updateState)
Task=`07adf25e-b9fe-44d0-adf4-9159ac0f1f4d`::moving from state aborting
-> state failed
Thread-999::DEBUG::2014-02-18
14:49:15,487::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-999::DEBUG::2014-02-18
14:49:15,488::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-999::ERROR::2014-02-18
14:49:15,488::dispatcher::67::Storage.Dispatcher.Protect::(run)
{'status': {'message': "Logical volume does not exist:
('3307f6fa-dd58-43db-ab23-b1fb299006c7/6b35673e-7062-4716-a6c8-d5bf72fe3280',)",
'code': 610}}
Thread-999::ERROR::2014-02-18
14:49:15,488::vm::1826::vm.Vm::(_normalizeVdsmImg)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::Unable to get volume size
for 6b35673e-7062-4716-a6c8-d5bf72fe3280
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 1822, in _normalizeVdsmImg
    drv['truesize'] = res['truesize']
KeyError: 'truesize'
Thread-999::DEBUG::2014-02-18
14:49:15,489::vm::2112::vm.Vm::(_startUnderlyingVm)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::_ongoingCreations released
Thread-999::ERROR::2014-02-18
14:49:15,489::vm::2138::vm.Vm::(_startUnderlyingVm)
vmId=`4669e4ad-7b76-4531-a16b-2b85345593a3`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2098, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/vm.py", line 2930, in _run
    devices = self.buildConfDevices()
  File "/usr/share/vdsm/vm.py", line 1935, in buildConfDevices
    self._normalizeVdsmImg(drv)
  File "/usr/share/vdsm/vm.py", line 1828, in _normalizeVdsmImg
    drv['volumeID'])
RuntimeError: Volume 6b35673e-7062-4716-a6c8-d5bf72fe3280 is corrupted
or missing
Best regards,
Boyan Tabakov
10 years, 8 months