Ovirt 3.6.6 on Centos 7.2 not using native gluster (gfapi)
by Ralf Schenk
Hello,
I set up 8 hosts and a self-hosted engine running HA on 3 of them from a gluster replica 3 volume. HA is working: I can put one of the 3 hosts configured for hosted-engine into maintenance and the engine migrates to another host. I ran hosted-engine --deploy with storage type gluster, and the hosted-engine storage is accessed as glusterfs.mydomain.de:/engine.
I set up another gluster volume (distributed replicated, 4x2=8 bricks) as data storage for my virtual machines, accessible as glusterfs.mydomain.de:/gv0. The ISO and export domains are served from an NFS server.
When I set up a VM on the gluster storage, I expected it to run with native gluster support. However, if I dumpxml the libvirt machine definition, its config contains something like this:
[...]
<disk type='file' device='disk' snapshot='no'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <source file='/rhev/data-center/00000001-0001-0001-0001-0000000000b9/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/011ab08e-71af-4d5b-a6a8-9b843a10329e/3f71d6c7-9b6d-4872-abc6-01a2b3329656'/>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <serial>011ab08e-71af-4d5b-a6a8-9b843a10329e</serial>
  <boot order='1'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
I expected to have something like this:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <source protocol='gluster' name='gv0/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/011ab08e-71af-4d5b-a6a8-9b843a10329e/3f71d6c7-9b6d-4872-abc6-01a2b3329656'>
    <host name='glusterfs.mydomain.de'/>
  </source>
[...]
All hosts have vdsm-gluster installed:
[root@microcloud21 libvirt]# yum list installed | grep vdsm-*
vdsm.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-cli.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-gluster.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-hook-hugepages.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-hook-vmfex-dev.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-infra.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-jsonrpc.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-python.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-xmlrpc.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-yajsonrpc.noarch 4.17.28-0.el7.centos @ovirt-3.6
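The file-based disk path above suggests the data domain is being accessed through a FUSE mount rather than gfapi. A quick way to confirm on a host (a sketch; the exact mount point depends on the storage domain):

    mount -t fuse.glusterfs
    # roughly expected output:
    # glusterfs.mydomain.de:/gv0 on /rhev/data-center/mnt/glusterSD/glusterfs.mydomain.de:_gv0 type fuse.glusterfs (...)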
How do I get my most-wanted feature, native gluster support, running?
--
Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs@databay.de

Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)
Problem using xenial cloud images with ovirt
by Claude Durocher
Hi.
I'm trying to run an Ubuntu cloud image on oVirt 3.6. It works fine with the 14.04 image or the CentOS 7 image, but I have trouble with the 16.04 image: I have no network, and cloud-init doesn't receive any data from oVirt. Also, the VM takes about 5 minutes to boot (and then I can't log in, as cloud-init doesn't seem to work).
I've validated the 16.04 image: I can boot it fine on my workstation running kvm.
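In case it helps reproduce the cloud-init part: the image can be booted locally with a NoCloud seed built by cloud-localds (from the cloud-image-utils package). This is a sketch; the file names and password are placeholders:

    cat > user-data <<EOF
    #cloud-config
    password: test1234
    chpasswd: { expire: False }
    ssh_pwauth: True
    EOF
    echo "instance-id: xenial-test" > meta-data
    cloud-localds seed.img user-data meta-data
    qemu-system-x86_64 -enable-kvm -m 1024 \
      -drive file=xenial-server-cloudimg-amd64-disk1.img,if=virtio \
      -drive file=seed.img,if=virtio -nographic

If cloud-init works with the seed but not under oVirt, that points at the datasource/network side rather than the image itself.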
Using ceph volumes with ovirt
by Alessandro De Salvo
Hi,
I'm happily running our research cluster in Italy on gluster, and now I'm trying to hotplug a ceph disk into a VM of my cluster, without success.
The ceph cluster is managed via OpenStack cinder, and I can create the disk correctly via oVirt (3.6.6.2-1 on CentOS 7.2). The problem comes when hotplugging the disk, or starting a machine with the disk attached.
In the vdsm log of the host where the VM is running or starting, I see the following error:
jsonrpc.Executor/5::INFO::2016-05-30 10:35:29,197::vm::2729::virt.vm::(hotplugDisk) vmId=`c189472e-25d2-4df1-b089-590009856dd3`::Hotplug disk xml:
<disk address="" device="disk" snapshot="no" type="network">
  <source name="images/volume-9134b639-c23c-4ff1-91ca-0462c80026d2" protocol="rbd">
    <host name="141.108.X.Y1" port="6789" transport="tcp"/>
    <host name="141.108.X.Y2" port="6789" transport="tcp"/>
  </source>
  <auth username="cinder">
    <secret type="ceph" uuid="<base 64 ceph secret>"/>
  </auth>
  <target bus="virtio" dev="vdb"/>
  <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
</disk>

jsonrpc.Executor/5::ERROR::2016-05-30 10:35:29,198::vm::2737::virt.vm::(hotplugDisk) vmId=`c189472e-25d2-4df1-b089-590009856dd3`::Hotplug failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2735, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 530, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirtError: XML error: invalid auth secret uuid
In fact, the UUID of the secret that oVirt uses for the hotplug seems to be the ceph secret itself (masked here as <base 64 ceph secret>), while libvirt expects the UUID of a libvirt secret object, per the instructions at http://docs.ceph.com/docs/jewel/rbd/libvirt/.
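For reference, the workflow on that page defines a libvirt secret object first and only then references its UUID from the disk XML; a sketch (the secret name and values are placeholders):

    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-define --file secret.xml        # prints the libvirt secret UUID
    virsh secret-set-value --secret <libvirt-secret-uuid> --base64 <base64 ceph key>

The <auth> element should then carry <libvirt-secret-uuid>, not the ceph key itself.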
Anyone got it working?
Thanks,
Alessandro
failing update ovirt-engine on centos 7
by Fabrice Bacchella
I have a dedicated machine to run ovirt-engine (not hosted). It's an up-to-date CentOS 7.2.1511.
I installed oVirt 3.6.6 a few weeks ago (May 10 17:56:44, according to yum.log).
Now I'm trying a full yum update and getting:

# yum update
....
Error: Package: ovirt-engine-tools-3.6.5.3-1.el7.centos.noarch (@ovirt-3.6)
           Requires: ovirt-engine-tools-backup = 3.6.5.3-1.el7.centos
           Removing: ovirt-engine-tools-backup-3.6.5.3-1.el7.centos.noarch (@ovirt-3.6)
               ovirt-engine-tools-backup = 3.6.5.3-1.el7.centos
           Updated By: ovirt-engine-tools-backup-3.6.6.2-1.el7.centos.noarch (ovirt-3.6)
               ovirt-engine-tools-backup = 3.6.6.2-1.el7.centos

rpm -qi ovirt-engine-tools says:

Version     : 3.6.5.3
Release     : 1.el7.centos
...
Build Date  : Mon Apr 11 23:45:30 2016
Build Host  : el7-vm02.phx.ovirt.org

and rpm -qi ovirt-engine-tools-backup says:

Name        : ovirt-engine-tools-backup
Version     : 3.6.5.3
Release     : 1.el7.centos
...
Build Date  : Mon Apr 11 23:45:30 2016
Build Host  : el7-vm02.phx.ovirt.org

yum update ovirt-engine-tools-backup ovirt-engine-tools fails in the same way. But:

yum list ovirt-engine-tools-backup ovirt-engine-tools
Loaded plugins: etckeeper, fastestmirror, versionlock
Loading mirror speeds from cached hostfile
Installed Packages
ovirt-engine-tools.noarch            3.6.5.3-1.el7.centos    @ovirt-3.6
ovirt-engine-tools-backup.noarch     3.6.5.3-1.el7.centos    @ovirt-3.6
Available Packages
ovirt-engine-tools-backup.noarch     3.6.6.2-1.el7.centos    ovirt-3.6

So there is no ovirt-engine-tools update available. And indeed:

yum update ovirt-engine-tools
Loaded plugins: etckeeper, fastestmirror, versionlock
Loading mirror speeds from cached hostfile
No packages marked for update

I have disabled ovirt-3.6-epel, because I already use epel; is that the problem?
What should I do? I don't think removing ovirt-engine-tools is an option.
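A couple of checks that might narrow it down (a sketch; only package and repo names already mentioned above are assumed):

    # Does any enabled repo publish a matching ovirt-engine-tools 3.6.6.2?
    yum --showduplicates list ovirt-engine-tools
    # Re-enable the disabled repo for one query to see if it carries it:
    yum --enablerepo=ovirt-3.6-epel --showduplicates list ovirt-engine-tools
    # Until the repos are consistent, the rest of the update can proceed with:
    yum update --exclude=ovirt-engine-tools-backup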
One RHEV Virtual Machine does not Automatically Resume following Compellent SAN Controller Failover
by Duckworth, Douglas C
Hello --
Not sure if y'all can help with this issue we've been seeing with RHEV...
On 11/13/2015, during a code upgrade of the Compellent SAN at our disaster recovery site, we failed over to the secondary SAN controller. Most virtual machines in our DR cluster resumed automatically after pausing, except VM "BADVM" on host "BADHOST".
In Engine.log you can see that BADVM was sent into "VM_PAUSED_EIO" state
at 10:47:57:
"VM BADVM has paused due to storage I/O problem."
On this Red Hat Enterprise Virtualization Hypervisor 6.6
(20150512.0.el6ev) Host, two other VMs paused but then automatically
resumed without System Administrator intervention...
In our DR Cluster, 22 VMs also resumed automatically...
None of these Guest VMs are engaged in high I/O as these are DR site VMs
not currently doing anything.
We sent this information to Dell. Their response:
"The root cause may reside within your virtualization solution, not the
parent OS (RHEV-Hypervisor disc) or Storage (Dell Compellent.)"
We are doing this Failover again on Sunday November 29th so we would
like to know how to mitigate this issue, given we have to manually
resume paused VMs that don't resume automatically.
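The manual resume itself is quick; a sketch of the host-side commands involved (the vdsClient verb is from memory and worth verifying):

    virsh -r list --all              # read-only; paused VMs show state "paused"
    vdsClient -s 0 list table        # get the VM ID
    vdsClient -s 0 continue <vm-id>  # resume through vdsm so engine state stays consistent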
Before we initiated the SAN controller failover, all iSCSI paths to targets were present on host tulhv2p03.
The VM logs on the host, in /var/log/libvirt/qemu/badhost.log, show that a storage error was reported:
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
block I/O error in device 'drive-virtio-disk0': Input/output error (5)
All disks used by this guest VM are provided by a single storage domain, COM_3TB4_DR, with serial "270". In syslog we do see that all paths for that storage domain failed:
Nov 13 16:47:40 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 0
Though these recovered later:
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: sdbg -
tur checker reports path is up
Nov 13 16:59:17 multipathd: 36000d310005caf000000000000000270: remaining
active paths: 8
Does anyone have an idea of why the VM would fail to automatically
resume if the iSCSI paths used by its Storage Domain recovered?
Thanks
Doug
--
Thanks
Douglas Charles Duckworth
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112
E: duckd(a)tulane.edu
O: 504-988-9341
F: 504-988-8505
Unable to start vdsmd. Dependency vdsm-network keeps failing
by Christopher Lord
I have a host that has dropped out of my cluster because it can't start vdsmd. Initially the logs were reporting a duplicate gateway, so I removed the duplicate. But I still can't start vdsmd. /var/log/vdsm/supervdsm.log is showing the following.
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsm-restore-net-config", line 439, in restore
    unified_restoration()
  File "/usr/share/vdsm/vdsm-restore-net-config", line 131, in unified_restoration
    changed_config = _filter_changed_nets_bonds(available_config)
  File "/usr/share/vdsm/vdsm-restore-net-config", line 258, in _filter_changed_nets_bonds
    kernel_config = KernelConfig(netinfo.NetInfo())
  File "/usr/lib/python2.7/site-packages/vdsm/netconfpersistence.py", line 204, in __init__
    for net, net_attr in self._analyze_netinfo_nets(netinfo):
  File "/usr/lib/python2.7/site-packages/vdsm/netconfpersistence.py", line 216, in _analyze_netinfo_nets
    yield net, self._translate_netinfo_net(net, net_attr)
  File "/usr/lib/python2.7/site-packages/vdsm/netconfpersistence.py", line 232, in _translate_netinfo_net
    self._translate_nics(attributes, nics)
  File "/usr/lib/python2.7/site-packages/vdsm/netconfpersistence.py", line 269, in _translate_nics
    nic, = nics
ValueError: too many values to unpack
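The last frame is Python's single-element unpacking, which only succeeds when the sequence holds exactly one item; a minimal reproduction (NIC names are illustrative):

    nics = ['eth0']
    nic, = nics        # fine: nic == 'eth0'

    nics = ['eth0', 'eth1']
    nic, = nics        # ValueError: too many values to unpack

So vdsm apparently sees a network with more than one NIC attached where it expects exactly one.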
I've downloaded the source code and have tried to follow along and see what's happening, but it's going a little (a lot) over my head at the moment. Could anyone help me out?
Thanks,
Chris
Re: [ovirt-users] oVirt 3.6 with ScaleIO
by Scott Sobotka
Thank you, Gianluca.
The OpenStack integration with ScaleIO absolutely warrants further investigation on my part. It looks like a rather recent addition to ScaleIO, so I'll need to run it in a lab setting for a while. It looks like the best long-term option.
In the meantime, I think I might be able to get this working by exposing a few larger LUNs via NFS running on the ScaleIO nodes. I lose ScaleIO's granular snapshot capabilities and MPIO that way, though.
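A sketch of that NFS idea (the scli flags vary by ScaleIO version, and the device/paths are placeholders; oVirt expects exports usable by vdsm:kvm, i.e. 36:36):

    scli --map_volume_to_sdc --volume_name ovirt_data --sdc_ip <nfs-node-ip>
    mkfs.xfs /dev/scinia
    mount /dev/scinia /export/ovirt_data
    chown 36:36 /export/ovirt_data
    echo "/export/ovirt_data *(rw,anonuid=36,anongid=36,all_squash)" >> /etc/exports
    exportfs -ra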
If I need real MPIO-type performance for a VM, I guess I can just install the ScaleIO client in the guest and access the LUN that way.
Also, I plan to learn more about oVirt's iSCSI direct-LUN logic, to see whether the output of iscsiadm and the ScaleIO SDC/SDS utilities are similar enough to justify requesting a direct ScaleIO integration feature. That might be a nice little side project for a while.
Have a great day,
--Scott Sobotka
---- On Sun, 29 May 2016 02:45:36 -0700 gianluca.cecchi@gmail.com wrote ----
On Sun, May 29, 2016 at 7:18 AM, Scott Sobotka <scott@pragmatica.us> wrote:
Has anyone gotten an oVirt hosted engine and shared storage environment running with ScaleIO as the backing storage?
I was thinking that I would just set up iSCSI LUNs in ScaleIO and have oVirt point directly to one for the hosted engine setup. From there, my plan was to assign disks to VMs as direct LUNs.
Unfortunately, it looks like ScaleIO dropped iSCSI support at around version 1.32 and I'm at a loss as to how I can expose ScaleIO LUNs to oVirt.
Thank you,
--Scott Sobotka
I don't know much about ScaleIO, but at least since the Liberty release it can be configured as a cinder backend driver in OpenStack:
http://docs.openstack.org/liberty/config-reference/content/ScaleIO-cinder...
http://docs.openstack.org/mitaka/config-reference/block-storage/drivers/e...
So the more "natural" way that comes to mind is to interface oVirt with cinder and configure ScaleIO in cinder in similar way as with Ceph backend
http://old.ovirt.org/Features/Cinder_Integration
But this implies configuring Openstack too and also I don't know how the security part (see ceph authentication in above link), if present in ScaleIO, could be managed
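For reference, the cinder side in those docs boils down to a backend stanza roughly like this (option names are from memory of the Liberty-era driver and should be checked against the linked pages):

    [DEFAULT]
    enabled_backends = scaleio

    [scaleio]
    volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
    san_ip = <scaleio-gateway>
    san_login = <admin-user>
    san_password = <password>
    sio_protection_domain_name = <protection-domain>
    sio_storage_pool_name = <storage-pool>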
HIH searching a solution
Gianluca
oVirt 3.6 with ScaleIO
by Scott Sobotka
Has anyone gotten an oVirt hosted engine and shared storage environment running with ScaleIO as the backing storage?
I was thinking that I would just set up iSCSI LUNs in ScaleIO and have oVirt point directly to one for the hosted engine setup. From there, my plan was to assign disks to VMs as direct LUNs.
Unfortunately, it looks like ScaleIO dropped iSCSI support at around version 1.32 and I'm at a loss as to how I can expose ScaleIO LUNs to oVirt.
Thank you,
--Scott Sobotka
Changing Cluster CPU Type in a single Host with Hosted Engine environment
by Ralf Braendli
Hi
I have the problem that I selected the wrong CPU type during the setup process.
Is it possible to change it without a new installation?
We have a single host with a hosted engine installed.
With this installation I can't put the host into maintenance mode, because the hosted engine runs on that host.
The version we use is 3.5.5-1.
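One sequence that might apply (a sketch, unverified on 3.5.5; global maintenance stops the HA agents from acting on the engine VM while the change is made):

    hosted-engine --set-maintenance --mode=global
    # change the cluster CPU type in the web admin UI; restart the engine VM/host as needed
    hosted-engine --set-maintenance --mode=none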
Best Regards
Ralf Brändli
Re: [ovirt-users] How to automate the ovirt host deployment?
by Karli Sjöberg
On 28 May 2016 11:50, Arman Khalatyan <arm2arm@gmail.com> wrote:
>
> Thank you for the hint. I will try next week.
> Foreman looks quite complex:)

Well, yeah, it takes a while to get into. But once you're there, you'll notice there's a lot of stuff already done for deploying new hosts[*], as well as boatloads more to make your life a lot easier! If anything, just for the "documentation" you get from using it that you don't have to write afterwards:)

/K

[*]: http://youtu.be/gozX891kYAY

>
> I would prefer a simple Python script with 4 lines: add, install, setup networks and activate.
>
> On 27.05.2016 at 6:51 PM, "Karli Sjöberg" <karli.sjoberg@slu.se> wrote:
>>
>> On 27 May 2016 18:41, Arman Khalatyan <arm2arm@gmail.com> wrote:
>> >
>> > Hi, I am looking for some method to automate the host deployments in a cluster environment.
>> > Assuming we have 20 nodes with CentOS 7 and eth0/eth1 configured, is it possible to automate installation with ovirt-sdk?
>> > Are there some examples?
>>
>> You could do that, or look into full life cycle management with The Foreman.
>>
>> /K
>>
>> >
>> > Thanks,
>> > Arman.
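For the "simple Python script" idea: a minimal sketch with the 3.6-era Python SDK (ovirt-engine-sdk-python). The URL, credentials, cluster name, and host naming are placeholders:

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url='https://engine.example.com/ovirt-engine/api',
              username='admin@internal', password='secret', insecure=True)

    for i in range(1, 21):
        name = 'node%02d' % i
        # "add" triggers host-deploy, i.e. the install step
        api.hosts.add(params.Host(name=name,
                                  address='%s.example.com' % name,
                                  cluster=api.clusters.get('Default'),
                                  root_password='hostpassword'))

    # network setup and activation can be driven the same way, e.g.
    # api.hosts.get(name).activate() once a host finishes installing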