unreachable iso domain
by Fabrice Bacchella
I'm trying to attach an ISO domain, but it keeps saying that it doesn't exist.
Yet if I look in /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf,
the directory given in OVESETUP_CONFIG/isoDomainStorageDir exists, and the
logs say:

2016-05-19 19:23:23,900 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (org.ovirt.thread.pool-8-thread-19) [78115edf] Command 'AttachStorageDomainVDSCommand( AttachStorageDomainVDSCommandParameters:{runAsync='true', storagePoolId='17434f4e-8d1a-4a88-ae39-d2ddd46b3b9b', ignoreFailoverLimit='false', storageDomainId='2a9fe2d7-ea38-4ced-a274-32734b7b571b'})' execution failed: IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS, error = Storage domain does not exist: (u'2a9fe2d7-ea38-4ced-a274-32734b7b571b',), code = 358

Does code = 358 mean something important?
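A quick way to cross-check what VDSM on the host actually sees is sketched below; this assumes the 3.6-era vdsClient tool is available on the hypervisor, and the UUID is simply the one from the error above:

vdsClient -s 0 getStorageDomainsList
vdsClient -s 0 getStorageDomainInfo 2a9fe2d7-ea38-4ced-a274-32734b7b571b

If the ISO domain's UUID does not show up in the first list, the engine and the host disagree about which storage domains exist.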
gluster VM disk permissions
by Bill James
I tried posting this to the ovirt-users list but got no response, so I'll try
here too.
I just set up a new oVirt cluster with gluster and NFS data domains.
VMs on the NFS domain start up with no issues.
VMs on the gluster domains complain of "Permission denied" on startup.
2016-05-17 14:14:51,959 ERROR [org.ovirt.engine.core.dal.dbbroker.audi
tloghandling.AuditLogDirector] (ForkJoinPool-1-worker-11) [] Correlation
ID: null, Call Stack: null, Custom Event ID: -1, Message: VM
billj7-2.j2noc.com is down with error. Exit message: internal error:
process exited while connecting to monitor: 2016-05-17T21:14:51.162932Z
qemu-kvm: -drive
file=/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e,if=none,id=drive-virtio-disk0,format=raw,serial=2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25,cache=none,werror=stop,rerror=stop,aio=threads:
Could not open
'/rhev/data-center/00000001-0001-0001-0001-0000000002c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e':
Permission denied
I did setup gluster permissions:
gluster volume set gv1 storage.owner-uid 36
gluster volume set gv1 storage.owner-gid 36
files look fine:
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# ls -lah
total 2.0G
drwxr-xr-x 2 vdsm kvm 4.0K May 17 09:39 .
drwxr-xr-x 11 vdsm kvm 4.0K May 17 10:40 ..
-rw-rw---- 1 vdsm kvm 20G May 17 10:33
a2b0a04d-041f-4342-9687-142cc641b35e
-rw-rw---- 1 vdsm kvm 1.0M May 17 09:38
a2b0a04d-041f-4342-9687-142cc641b35e.lease
-rw-r--r-- 1 vdsm kvm 259 May 17 09:39
a2b0a04d-041f-4342-9687-142cc641b35e.meta
I did check, and the vdsm user can read the file just fine.
*If I chmod the disk to 666, the VM starts up fine.*
Also, if I chgrp it to qemu, the VM starts up fine.
[root@ovirt2 prod a7af2477-4a19-4f01-9de1-c939c99e53ad]# ls -l
253f9615-f111-45ca-bdce-cbc9e70406df
-rw-rw---- 1 vdsm qemu 21474836480 May 18 11:38
253f9615-f111-45ca-bdce-cbc9e70406df
Seems similar to the issue here, but that suggests it was fixed:
https://bugzilla.redhat.com/show_bug.cgi?id=1052114
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep 36
/etc/passwd /etc/group
/etc/passwd:vdsm:x:36:36:Node Virtualization Manager:/:/bin/bash
/etc/group:kvm:x:36:qemu,sanlock
ovirt-engine-3.6.4.1-1.el7.centos.noarch
glusterfs-3.7.11-1.el7.x86_64
qemu-img-ev-2.3.0-31.el7_2.4.1.x86_64
qemu-kvm-ev-2.3.0-31.el7_2.4.1.x86_64
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
I also set the libvirt qemu user to root, for the import-to-ovirt.pl script.
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep ^user
/etc/libvirt/qemu.conf
user = "root"
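Since the disk's group seems to be what matters here, it may also be worth checking how libvirt is told to run QEMU and whether it re-chowns disk files itself. A minimal check, assuming a stock CentOS 7 libvirt layout (the keys shown are standard qemu.conf settings, not quoted from this host):

# show the user/group QEMU runs as and whether libvirt changes image ownership
grep -E '^(user|group|dynamic_ownership)' /etc/libvirt/qemu.conf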
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume
info gv1
Volume Name: gv1
Type: Replicate
Volume ID: 062aa1a5-91e8-420d-800e-b8bc4aff20d8
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick2: ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick3: ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
features.shard: on
features.shard-block-size: 64MB
storage.owner-uid: 36
storage.owner-gid: 36
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume
status gv1
Status of volume: gv1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/bric
k1/gv1 49152 0 Y 2046
Brick ovirt2-gl.j2noc.com:/ovirt-store/bric
k1/gv1 49152 0 Y 22532
Brick ovirt3-gl.j2noc.com:/ovirt-store/bric
k1/gv1 49152 0 Y 59683
NFS Server on localhost 2049 0 Y 2200
Self-heal Daemon on localhost N/A N/A Y 2232
NFS Server on ovirt3-gl.j2noc.com 2049 0 Y 65363
Self-heal Daemon on ovirt3-gl.j2noc.com N/A N/A Y 65371
NFS Server on ovirt2-gl.j2noc.com 2049 0 Y 17621
Self-heal Daemon on ovirt2-gl.j2noc.com N/A N/A Y 17629
Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
Any ideas on why oVirt thinks it needs the qemu group?
100% memory usage on desktop environments
by Nicolás
Hi,
Probably not an oVirt issue, but maybe someone can help. I've deployed a
pretty basic VM (Ubuntu 14.04 server, 4GB RAM, 4 CPUs, 15GB storage).
Each time I install an additional desktop environment (GNOME, KDE,
whatever...), CPU usage rises to 100% all the time, to the extreme that
interacting with the machine becomes impossible (maybe a mouse movement
is propagated 3 minutes later or so...).
To debug this, I installed LXDE, which is a lightweight desktop
environment based on Xorg. I could see there is an Xorg process
consuming one of the CPUs and the machine stops responding as far as the
desktop environment goes. I have not changed anything in the
configuration file.
I could also see this only happens when QXL is chosen as the display
driver. When CIRRUS is chosen, everything works smoothly and CPU is
~100% idle. The downside is that we want to use SPICE and CIRRUS won't
allow it.
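A few guest-side checks that might narrow this down. This is only a sketch, assuming an Ubuntu/Debian guest with default package names and log paths; none of it is taken from the original report:

lspci | grep -i vga                              # confirm the emulated adapter really is QXL
dpkg -l | grep xserver-xorg-video-qxl            # is the QXL Xorg driver installed?
grep -iE 'qxl|fbdev|vesa' /var/log/Xorg.0.log    # which driver Xorg actually loaded

If Xorg falls back to an unaccelerated driver on the QXL device, that alone could explain the CPU burn.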
Why does this happen? Is this an OS-side driver issue? Any hint on how it can
be fixed?
Thanks.
Nicolás
oVirt All-in-One upgrade path and requested improvements
by Neal Gompa
Hello,
I recently found out that oVirt "deprecated" the All-in-One
configuration option in oVirt 3.6 and removed it in oVirt 4.0. This is
a huge problem for me, as it means that my oVirt machines don't have
an upgrade path.
My experiments with the self-hosted engine have ended in failure for a
couple of reasons:
* The hosted engine deploy expects that a FQDN is already paired with
an IP address. This is obviously false in most home environments, and
even in the work environment where I use oVirt regularly. There's no
workaround for this (except having a third machine to run the
engine!), and this utterly breaks the only way to use oVirt in a
DHCP-centric environment where I may not control the network
addressing.
* Other error states have caused the whole thing to break and just
leave the system a broken mess. With no way to clean up, I'm left
guessing how to undo everything, which is hellish and leads me to just
wipe the whole system and start over.
In addition, I was hoping that there would be improvements with the
single system case, rather than destruction of this capability. Some
of the improvements are things I think would be useful in even a
multi-node setup, too.
For example, I would like to see live migration capabilities with
local storage across datacenters, as this capability in vMotion makes
deployments a lot more flexible. Sometimes, local storage is really
the only way to get the kind of speed needed for some workloads, and
being able to offer some kind of HA for VMs on local storage would be
excellent. In addition to being useful for all-in-one setups, it's
quite useful for self-hosted engine configurations, too.
It's also rather irritating that there's no way to migrate stuff from
shared storage to local storage and back. On top of that, datacenters
that have local storage can't have shared storage or vice versa.
On top of that, it looks like the all-in-one code is being kept around
anyway for the oVirt Live stuff, so why not just keep the capability
and improve it? oVirt should become the best virtualization solution
for everyone, not just people who have access to huge datacenters
where all the conditions are perfect.
--
真実はいつも一つ!/ Always, there's only one truth!
Re: [ovirt-users] oVirt hosted-engine setup stuck in final phase registering host to engine
by Ralf Schenk
Hello,
don't waste time on it. I reinstalled ovirt-hosted-engine-ha.noarch and
then after some time the engine magically started. I'm now adding hosts to
the engine and will deploy two other instances of the engine on two
other hosts to get it highly available. So far my gluster seems usable
inside the engine and the hosts.
If it's interesting for you: I also set up HA nfs-ganesha on the hosts
to provide NFS shares to multiple VMs (they will be php-fpm backends to
Nginx) in an efficient way. I also tested and benchmarked (only
sysbench) using one host as MDS for pNFS with the gluster FSAL. So I'm able
to mount my gluster via "mount ... type nfs4 -o minorversion=1" and am
rewarded with pnfs=LAYOUT_NFSV4_1_FILES in "/proc/self/mountstats". I
can see good network distribution and connections to multiple servers of
the cluster when benchmarking an NFS mount.
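For anyone wanting to reproduce that check, a minimal sketch (the server name, export path and mount point are placeholders, not values from this setup):

mount -t nfs4 -o minorversion=1 ganesha-host:/gv1 /mnt/gv1
grep pnfs /proc/self/mountstats    # expect pnfs=LAYOUT_NFSV4_1_FILES for that mount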
What I don't understand: the engine and also the setup seem to have a problem
with my type 6 bond. That type proved to be best for glusterfs and NFS
performance and distribution over my 2 network interfaces. Additionally,
I'm losing my IPMI on the shared LAN if I use a type 4 802.3ad bond.
That's what I have:
eth0___bond0_________br0 (192.168.252.x) for VMs/Hosts
eth1__/ \__bond0.10_ovirtmgmt (172.16.252.x, VLAN 10) for
Gluster, NFS, Migration, Management
Is this ok?
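For reference, that layout roughly corresponds to initscripts configuration along these lines. It is only a sketch: file names and keys follow the usual CentOS 7 network-scripts conventions, and the BONDING_OPTS value for a "type 6" bond is an assumption, not copied from these hosts:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-alb miimon=100"   # balance-alb is bonding mode 6
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-bond0.10
DEVICE=bond0.10
VLAN=yes
BRIDGE=ovirtmgmt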
Thanks a lot for your effort. I hope that I can give back something to
the community by actively using the mailing-list.
Bye
On 18.05.2016 at 16:36, Simone Tiraboschi wrote:
> Really really strange,
> adding Martin here.
>
>
>
> On Wed, May 18, 2016 at 4:32 PM, Ralf Schenk <rs(a)databay.de> wrote:
>
> Hello,
>
> When I restart (systemctl restart ovirt-ha-broker ovirt-ha-agent)
> broker seems to fail: (from journalctl -xe)
>
> -- Unit ovirt-ha-agent.service has begun starting up.
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: Traceback
> (most recent call last):
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker", line 25, in
> <module>
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> broker.Broker().run()
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 56, in run
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> self._initialize_logging(options.daemon)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 131, in _initialize_logging
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> level=logging.DEBUG)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 1529, in basicConfig
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: hdlr =
> FileHandler(filename, mode)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 902, in __init__
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> StreamHandler.__init__(self, self._open())
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 925, in _open
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: stream =
> open(self.baseFilename, self.mode)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: IOError:
> [Errno 6] No such device or address: '/dev/stdout'
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: Traceback (most
> recent call last):
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent", line 25, in
> <module>
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: agent.Agent().run()
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 77, in run
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]:
> self._initialize_logging(options.daemon)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 159, in _initialize_logging
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]:
> level=logging.DEBUG)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 1529, in basicConfig
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: hdlr =
> FileHandler(filename, mode)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 902, in __init__
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]:
> StreamHandler.__init__(self, self._open())
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 925, in _open
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: stream =
> open(self.baseFilename, self.mode)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: IOError: [Errno
> 6] No such device or address: '/dev/stdout'
> May 18 16:26:19 microcloud21 systemd[1]: ovirt-ha-broker.service:
> main process exited, code=exited, status=1/FAILURE
> May 18 16:26:19 microcloud21 systemd[1]: Unit
> ovirt-ha-broker.service entered failed state.
> May 18 16:26:19 microcloud21 systemd[1]: ovirt-ha-broker.service
> failed.
> May 18 16:26:19 microcloud21 systemd[1]: ovirt-ha-agent.service:
> main process exited, code=exited, status=1/FAILURE
> May 18 16:26:19 microcloud21 systemd[1]: Unit
> ovirt-ha-agent.service entered failed state.
> May 18 16:26:19 microcloud21 systemd[1]: ovirt-ha-agent.service
> failed.
> May 18 16:26:19 microcloud21 systemd[1]: ovirt-ha-broker.service
> holdoff time over, scheduling restart.
>
> Of course /dev/stdout exists:
>
> [root@microcloud21 ~]# ls -al /dev/stdout
> lrwxrwxrwx 1 root root 15 May 18 12:29 /dev/stdout -> /proc/self/fd/1
>
--
*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *rs(a)databay.de* <mailto:rs@databay.de>
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)
------------------------------------------------------------------------
thin provisioned vm suspend
by Dobó László
Hi,
I have an annoying problem with thin provisioned VMs, which are on
iSCSI LVM.
When I'm copying big files, qemu often suspends the VM while vdsm is
extending the volume ("VM test2 has been paused due to no Storage space
error").
Can I tune this free space detection behavior somehow? It would be good
to start lvextend much earlier.
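The knobs that usually govern this live in VDSM's configuration. A hedged sketch only: the option names are the standard [irs] settings as I recall them from VDSM of that era, the values are just examples, and vdsmd needs a restart on each host afterwards:

# /etc/vdsm/vdsm.conf
[irs]
# threshold that triggers extension of thin provisioned block volumes;
# lower values make vdsm extend earlier (default 50)
volume_utilization_percent = 25
# size of each extension step in MB (default 1024)
volume_utilization_chunk_mb = 2048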
Regards,
enax
Re: [ovirt-users] 100% memory usage on desktop environments
by Karli Sjöberg
On 18 May 2016 at 7:48 PM, Nicolás <nicolas@devels.es> wrote:
>
> On 18/05/16 at 18:11, Karli Sjöberg wrote:
>>
>> On 18 May 2016 at 7:03 PM, Nicolás <nicolas@devels.es> wrote:
>> >
>> > Hi Karli,
>> >
>> > On 18/05/16 at 16:59, Karli Sjöberg wrote:
>> >>
>> >> On 18 May 2016 at 5:49 PM, Nicolás <nicolas@devels.es> wrote:
>> >> >
>> >> > Hi,
>> >> >
>> >> > Probably not an oVirt issue, but maybe someone can help. I've deployed a
>> >> > pretty basic VM (ubuntu 14.04 server, 4GB RAM, 4 CPUs, 15GB storage).
>> >>
>> >> Just spitballing here: 14.04 only? Tried 16.04, or any other OS for that
>> >> matter? For now, it sounds more guest related rather than oVirt.
>> >>
>> >> /K
>> >
>> > I tried a vanilla centos-7.1 as well and the same happens. I'm of the same
>> > opinion that this is more a guest related issue, it's just I'd like to find
>> > out why this only happens with QXL and not with CIRRUS.
>> >
>> > Thanks.
>>
>> Very interesting. Are your hosts all of the same architecture(family)?
>>
>> /K
>
> Actually we have a nice mixture of manufacturers. We run 7 hosts, paired
> 4-2-1 in relation to architecture. All of them have the same resources,
> though (CPUs + RAM). In this case, I can't test the VM on different hosts
> because we've separated one of them (one of the "4") on a standalone oVirt
> datacenter as we're making tests on it, but I remember this has already
> happened to me in the past (I didn't have the time to debug it at that
> time, though).

And it doesn't happen with a similar VM in the other datacenter, on the same
hardware?

/K
Linux guests vs. Spice/QXL?
by Uwe Laverenz
Hi all,
I'm running some tests on oVirt (3.6.5.3) on CentOS 7 and almost
everything works quite well so far.
CentOS 6.x, Windows 7 and 2008R2 work fine with Spice/QXL, so my setup
seems to be ok.
Other Linux systems don't work: Debian 8, Fedora 23/24, CentOS 7.x,
Kubuntu 16.04... CentOS 7.x even kills its X server every time the user
logs out. X-)
They all have in common that they show a fixed display resolution of
1024x768 pixels. This cannot be changed manually, and of course
automatic display resizing doesn't work either.
All machines have spice-vdagent and ovirt-guest-agent installed and running.
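A few guest-side checks that might help pin down where resizing breaks. This is only a sketch, assuming default package names and log locations; none of the commands or their output come from the original report:

lsmod | grep qxl                                # is the qxl KMS driver loaded in the guest?
grep -iE 'qxl|modesetting' /var/log/Xorg.0.log  # which driver the X server actually picked
xrandr                                          # what modes the virtual output currently offers
systemctl status spice-vdagentd                 # the agent required for dynamic resizing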
Is this a local problem or is this known/expected behaviour? Is there
anything I can do to improve this?
thank you,
Uwe
Re: [ovirt-users] 100% memory usage on desktop environments
by Karli Sjöberg
On 18 May 2016 at 7:03 PM, Nicolás <nicolas@devels.es> wrote:
>
> Hi Karli,
>
> On 18/05/16 at 16:59, Karli Sjöberg wrote:
>>
>> On 18 May 2016 at 5:49 PM, Nicolás <nicolas@devels.es> wrote:
>> >
>> > Hi,
>> >
>> > Probably not an oVirt issue, but maybe someone can help. I've deployed a
>> > pretty basic VM (ubuntu 14.04 server, 4GB RAM, 4 CPUs, 15GB storage).
>>
>> Just spitballing here: 14.04 only? Tried 16.04, or any other OS for that
>> matter? For now, it sounds more guest related rather than oVirt.
>>
>> /K
>
> I tried a vanilla centos-7.1 as well and the same happens. I'm of the same
> opinion that this is more a guest related issue, it's just I'd like to find
> out why this only happens with QXL and not with CIRRUS.
>
> Thanks.

Very interesting. Are your hosts all of the same architecture(family)?

/K