Ovirt Engine - Wait For Launch
by Maton, Brett
I did a yum update on one of my oVirt hosts a few days ago.
Since then, all of the VMs on that host have been shown in the web UI with a
status of 'Wait For Launch'.
The VMs all appear to still be up and running, but I can't manage or migrate
them.
Is there anything I can do to clear this issue?
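For reference, a way to cross-check what VDSM itself reports for those VMs on
the updated host (a diagnostic sketch using the legacy vdsm-cli client; it only
reads state, it changes nothing):

    # On the affected host: list the VMs as VDSM sees them and make sure vdsmd
    # came back cleanly after the update.
    vdsClient -s 0 list table
    systemctl status vdsmd
    journalctl -u vdsmd --since "1 hour ago"    # any errors around the yum update?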
Regards,
Brett
8 years, 5 months
dwhd error: host_interface_samples_history_pkey
by Bill James
oVirt events keep saying
"ETL service sampling has encountered an error. Please consult the
service log for more details."
ovirt-engine-dwhd.log says:
2016-05-27 10:55:00|UWxd1W|48u96S|MjrqV0|OVIRT_ENGINE_DWH|StatisticsSync|Default|6|Java Exception|tJDBCOutput_4|org.postgresql.util.PSQLException:ERROR: root page 3 of index "host_interface_samples_history_pkey" has level 0, expected 1|1
Exception in component tJDBCOutput_3
org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
(see attached)
I double-checked all my VMs and they all have an interface assigned, so I'm not
sure what host_interface_samples_history_pkey refers to.
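In case it is simply a corrupted index in the DWH database rather than anything
about VM interfaces, this is roughly how one might rebuild it (a sketch; it
assumes the default ovirt_engine_history database name and that dwhd is stopped
first):

    # Stop the DWH service, rebuild the suspect index, start the service again.
    systemctl stop ovirt-engine-dwhd
    su - postgres -c "psql ovirt_engine_history -c 'REINDEX INDEX host_interface_samples_history_pkey;'"
    systemctl start ovirt-engine-dwhd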
Thanks.
8 years, 5 months
What recovers a VM from pause?
by Nicolas Ecarnot
Hello,
We're planning a move from our old building to a new one a few meters away.
Like Martijn (https://www.mail-archive.com/users@ovirt.org/msg33182.html),
I have maintenance planned on the storage side.
Say an oVirt DC is using a SAN LUN via iSCSI (Equallogic). This SAN lets me set
up block replication between two SANs that oVirt sees as one (Dell calls it
SyncRep), and then switch all iSCSI access over to the replicated LUN.
When I do this, the iSCSI stack of each oVirt host notices the disconnection,
tries to reconnect, and succeeds. Across our hosts this takes between 4 and 15
seconds.
When it happens fast enough, the oVirt engine and the VMs don't even notice,
and they keep running happily.
When it takes more than 4 seconds, there are two cases:
1 - The hosts and/or oVirt and/or the SPM (I actually don't know which) notice
the storage failure and pause the VMs. When the iSCSI stack reconnects, the VMs
are automatically recovered from pause, and the whole episode takes less than
30 seconds. That is very acceptable for us, as this operation is extremely rare.
2 - Same storage failure, VMs paused, but some VMs stay paused forever.
A manual "run" action is required; once done, everything recovers correctly.
This is also quite acceptable, but here come my questions:
- *What* process or piece of code in oVirt decides when to UN-pause a VM, and
under what conditions? That would help me understand why some cases go even
more smoothly than others.
- Are there related timeouts I could play with among the engine-config options?
- [a bit off-topic] Is it safe to increase some iSCSI timeouts or buffer sizes
in the hope that this kind of disconnection goes unnoticed? (See the sketch
below.)
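A minimal sketch of where I would look first, on the engine and on the hosts
(the iSCSI parameter below is the standard open-iscsi replacement timeout; VDSM
manages its own iSCSI sessions, so treat this as a starting point, not a recipe):

    # On the engine: list the configurable keys and grep for storage/timeout ones.
    engine-config -l | grep -i -e timeout -e storage

    # On each host: how long the iSCSI layer queues I/O before failing it back to
    # the guest (which is what gets the VM paused). 120 is an example value, in seconds.
    grep replacement_timeout /etc/iscsi/iscsid.conf
    iscsiadm -m node -o update -n node.session.timeo.replacement_timeout -v 120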
--
Nicolas ECARNOT
8 years, 5 months
Re: [ovirt-users] How to automate the ovirt host deployment?
by Karli Sjöberg
On 27 May 2016 at 18:41, Arman Khalatyan <arm2arm(a)gmail.com> wrote:
>
> Hi, I am looking for a way to automate host deployment in a cluster environment.
> Assuming we have 20 nodes with CentOS 7 and eth0/eth1 configured, is it
> possible to automate the installation with ovirt-sdk?
> Are there some examples?

You could do that, or look into full life-cycle management with The Foreman.

/K

>
> Thanks,
> Arman.
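For the ovirt-sdk route, the underlying REST call is simple enough that plain
curl shows the idea (a sketch; host names, cluster and credentials are
placeholders, and the engine performs the actual host-deploy once the host is
added):

    # Add (and thereby deploy) one host through the engine REST API; loop for 20 nodes.
    curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
         -d '<host>
               <name>node01</name>
               <address>node01.example.com</address>
               <root_password>SECRET</root_password>
               <cluster><name>Default</name></cluster>
             </host>' \
         https://engine.example.com/api/hosts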
8 years, 5 months
Can't perform search after setting up an Active Directory
by Alexis HAUSER
Hi,
I added an Active Directory server to RHEV, but I can't perform any search and I don't see any namespace in the interface.
I'm able to perform a search from the command line with the same search user DN, password and certificate:
LDAPTLS_CACERT=/somewhere/myca.pem ldapsearch -H ldaps://myserver.com -x -D 'CN=Something,DC=myserver,DC=come' -w 'mypaswd' -b 'CN=users,DC=something,DC=com'
In engine.log, if I grep for WARN, I can see the following messages:
2016-05-25 05:54:55,840 WARN [org.ovirt.engine.core.bll.SearchQuery] (ajp-/127.0.0.1:8702-3) [] Illegal search: ADUSER@AD-authz:undefined: allnames=*: null
2016-05-25 05:54:55,843 WARN [org.ovirt.engine.core.bll.SearchQuery] (ajp-/127.0.0.1:8702-3) [] Illegal search: ADGROUP@AD-authz:undefined: name=*: null
2016-05-25 05:54:58,160 WARN [org.ovirt.engine.core.bll.SearchQuery] (ajp-/127.0.0.1:8702-9) [] Illegal search: ADUSER@AD-authz:undefined: allnames=*: null
2016-05-25 05:54:58,162 WARN [org.ovirt.engine.core.bll.SearchQuery] (ajp-/127.0.0.1:8702-9) [] Illegal search: ADGROUP@AD-authz:undefined: name=*: null
I also tried adding the following configuration, but it didn't solve the problem:
sequence-init.init.100-my-basedn-init-vars = my-basedn-init-vars
sequence.my-basedn-init-vars.010.description = set baseDN
sequence.my-basedn-init-vars.010.type = var-set
sequence.my-basedn-init-vars.010.var-set.variable = simple_baseDN
sequence.my-basedn-init-vars.010.var-set.value = CN=Users,DC=something,DC=com
Any ideas ?
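In case it helps to take the web UI out of the picture, the extensions can be
exercised directly from the command line (a sketch; 'AD' is the profile name I
am assuming from the AD-authz name in the log above):

    # Validate the extension configuration, try a login, then a directory search.
    ovirt-engine-extensions-tool info list-extensions
    ovirt-engine-extensions-tool aaa login-user --profile=AD --user-name=someuser
    ovirt-engine-extensions-tool aaa search --extension-name=AD-authz \
        --entity=principal --entity-name=someuser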
By the way, before I renamed the .profile and auth* files left over from my LDAP configuration, the web interface did suggest the LDAP namespace for my AD domain when I tried to perform a search. Is that a bug?
8 years, 5 months
listing domains fails
by Fabrice Bacchella
I'm trying to list storage domains using the API, and I'm getting a strange
message:

> GET /api/storagedomains;case_sensitive=True HTTP/1.1
...
> Version: 3
> Content-Type: application/xml
> Accept: application/xml
> Filter: False
> Prefer: persistent-auth
> Content-Length: 0

< HTTP/1.1 404 Not Found
< <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
< <fault>
<     <reason>Operation Failed</reason>
<     <detail>Entity not found: Storage server connection: id=6860d96f-557e-4d82-a209-401d72bd6e16</detail>
< </fault>

And in engine.log, I got:

Before:
2016-05-30 13:07:31,017 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default task-25) [4c2e1822] START, DisconnectStorageServerVDSCommand(HostName = ng320, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='4426ed42-0805-43b1-92d9-3e5a680eaf38', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='LOCALFS', connectionList='[StorageServerConnections:{id='6860d96f-557e-4d82-a209-401d72bd6e16', connection='/data/ovirt/data', iqn='null', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 63f890
...
2016-05-30 13:07:32,312 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default task-25) [4c2e1822] FINISH, DisconnectStorageServerVDSCommand, return: {6860d96f-557e-4d82-a209-401d72bd6e16=0}, log id: 63f890

When I run the command:
2016-05-31 15:34:11,259 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-13) [] Operation Failed: Entity not found: Storage server connection: id=6860d96f-557e-4d82-a209-401d72bd6e16

What's wrong?
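One way to see what the engine still knows about that connection id is to query
the storage connections collection directly (a sketch; same API conventions as
the request above, credentials are placeholders):

    # Look up the dangling connection, then list all known storage connections.
    curl -k -u 'admin@internal:PASSWORD' -H 'Version: 3' \
         https://engine.example.com/api/storageconnections/6860d96f-557e-4d82-a209-401d72bd6e16
    curl -k -u 'admin@internal:PASSWORD' -H 'Version: 3' \
         https://engine.example.com/api/storageconnections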
8 years, 5 months
VMs using hugepages
by Ralf Schenk
Hello,
I am trying to get VMs to use hugepages by default. We use hugepages on our
manually set up libvirt VMs and see performance advantages. I installed
vdsm-hook-hugepages, but according to
http://www.ovirt.org/develop/developer-guide/vdsm/hook/hugepages/ I have to set
hugepages=SIZE, and the engine web frontend doesn't show an option anywhere to
specify this.
I want the VMs to have:
<memoryBacking>
  <hugepages/>
</memoryBacking>
Any hint?
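If it is only a matter of exposing the hook's custom property, this is roughly
what is needed on the engine side before a per-VM option appears (a sketch based
on the hook's documented 'hugepages' custom property; the exact regex is an
assumption):

    # Register the custom VM property for the cluster compatibility version in use,
    # then restart the engine. The property then shows up under the VM's
    # Custom Properties and is consumed by vdsm-hook-hugepages on the host.
    engine-config -s "UserDefinedVMProperties=hugepages=^[0-9]+$" --cver=3.6
    systemctl restart ovirt-engine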
Versions:
vdsm.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-cli.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-gluster.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-hook-hugepages.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-hook-vmfex-dev.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-infra.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-jsonrpc.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-python.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-xmlrpc.noarch 4.17.28-0.el7.centos @ovirt-3.6
vdsm-yajsonrpc.noarch 4.17.28-0.el7.centos @ovirt-3.6
Engine:
ovirt-engine.noarch 3.6.6.2-1.el7.centos @ovirt-3.6
--
Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs(a)databay.de

Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de
8 years, 5 months
Clone, template, pools: how do they use disk space?
by Alexis HAUSER
Hi,
I would like to know what happens on the storage side when using the different
methods of cloning or creating VMs from templates and pools.
I'd also like to know in which cases a VM and its virtual disks are totally
independent, and in which cases they are not.
Sadly, the RHEV documentation doesn't really cover this, and I can't find any
explicit information about it.
For example, when making a VM from a template with the pre-allocated disk
option and a 50 GB virtual disk, only 3 GB are used on the physical storage.
Another example: when making a pool of 10 VMs based on a VM with a 50 GB
virtual disk, only 2 GB of additional space are used on the physical storage.
What exactly is happening in these cases?
Here are the cases I would like information about (physical storage usage and
independence of the VMs):
- using the simple "clone" function
- making a VM from a template in "clone" mode
- making a VM from a template in "thin" mode
- making VMs in pools
Are there modes that only store the difference from the original VM, and other
modes that fully copy the data of the original VM's virtual disk? (A quick way
to check this on the storage itself is sketched below.)
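A sketch of how to check this directly on the storage (paths are illustrative;
on file-based storage each disk lives under an images/<disk-id> directory on
the domain):

    # Compare virtual size, actual allocation and the backing chain of a disk image.
    # A thin, template-based disk shows a 'backing file' pointing at the template
    # volume; a fully cloned, independent disk does not.
    qemu-img info /rhev/data-center/mnt/<storage>/<sd-uuid>/images/<disk-uuid>/<vol-uuid>
    du -sh /rhev/data-center/mnt/<storage>/<sd-uuid>/images/<disk-uuid>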
8 years, 5 months
Re: [ovirt-users] oVirt 4.0 hosted-engine deploy exits without messages or logs
by Gianluca Cecchi
On Fri, May 13, 2016 at 10:43 AM, Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
> On Thu, May 12, 2016 at 3:32 PM, Sandro Bonazzola wrote:
>
>>
>>
>>
>> I strongly suggest to use master. In particular, I strongly suggest to use
>> oVirt Node Next ISO for hosts and the engine appliance for the deployment.
>> Installing Hosted Engine using Cockpit Web UI has been a nice experience
>> yesterday.
>> https://twitter.com/SandroBonazzola/status/730426730515673092
>>
>> Be sure to install also the new dashboard package once your Hosted Engine
>> is up, that's another nice thing to see in action.
>>
>>
> It seems that master from yesterday, attempting a self-hosted engine on NFSv3
> with the hypervisor on CentOS 7.2, gives:
>
> [ INFO ] Stage: Transaction setup
> [ INFO ] Stage: Misc configuration
> [ INFO ] Stage: Package installation
> [ INFO ] Stage: Misc configuration
> [ INFO ] Configuring libvirt
> [ INFO ] Configuring VDSM
> [ INFO ] Starting vdsmd
> [ INFO ] Creating Storage Domain
> [ INFO ] Creating Storage Pool
> [ INFO ] Connecting Storage Pool
> [ INFO ] Verifying sanlock lockspace initialization
> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101]
> Network is unreachable
> [ INFO ] Stage: Clean up
>
>
It seems I get the same problem with the 4.0 beta too...
At least a basic self-hosted engine install should work in a beta, I think.
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network
is unreachable
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160531110857.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please check the issue, fix and redeploy
In the ovirt-hosted-engine-setup log I initially see an error about SSL and
vdsmd:
May 31 11:03:58 ovirtita.localdomain.local vdsm[8571]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
2016-05-31 11:04:36 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/bin/systemctl', 'status', 'vdsmd.service') stderr:
I also see it in messages.
And then:
2016-05-31 11:08:51 DEBUG otopi.plugins.gr_he_setup.sanlock.lockspace storage_backends.create_volume:270 Connecting to VDSM
2016-05-31 11:08:51 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/sanlock/lockspace.py", line 143, in _misc
    lockspace + '.metadata': md_size,
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 369, in create
    service_size=size)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 279, in create_volume
    volUUID=volume_uuid
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 245, in _get_volume_path
    volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
    self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
    self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 101] Network is unreachable
2016-05-31 11:08:51 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Misc configuration': [Errno 101] Network is unreachable
2016-05-31 11:08:51 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/firewalld/hosted-console.xml''
2016-05-31 11:08:51 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/iptables.example''
2016-05-31 11:08:51 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/sysconfig/iptables''
The NFS file system is indeed mounted, together with a loop device:
/dev/loop1                               2.0G  3.1M  1.9G   1% /rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmpazuZQZ
ovirtita.localdomain.local:/SHE_DOMAIN    25G   34M   25G   1% /rhev/data-center/mnt/ovirtita.localdomain.local:_SHE__DOMAIN
And getting this in messages:
May 31 11:03:51 ovirtita rpc.mountd[937]: authenticated mount request from 192.168.122.51:668 for /SHE_DOMAIN (/SHE_DOMAIN)
May 31 11:03:51 ovirtita rpc.mountd[937]: authenticated unmount request from 192.168.122.51:927 for /SHE_DOMAIN (/SHE_DOMAIN)
May 31 11:03:51 ovirtita journal: vdsm SchemaCache WARNING Provided value "1" not defined in StorageDomainType enum for StoragePool.connectStorageServer
May 31 11:03:51 ovirtita journal: vdsm SchemaCache WARNING Provided parameters {u'protocol_version': 3, u'connection': u'ovirtita.localdomain.local:/SHE_DOMAIN', u'user': u'kvm', u'id': u'd8094512-9c9e-4138-abbb-20715ea7fb96'} do not match any of union ConnectionRefParameters values
May 31 11:03:51 ovirtita rpc.mountd[937]: authenticated mount request from 192.168.122.51:899 for /SHE_DOMAIN (/SHE_DOMAIN)
sanlock.log contains the lines below.
It seems strange that the hostname is truncated from "ovirtita.localdomain.local"
to "ovirtita.l" on many lines.
2016-05-31 11:07:42+0200 6143 [8341]: s1 lockspace 9501187c-41a3-4e80-aa97-772bacae4c84:250:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmpazuZQZ/9501187c-41a3-4e80-aa97-772bacae4c84/dom_md/ids:0
2016-05-31 11:08:03+0200 6164 [8342]: s1:r1 resource 9501187c-41a3-4e80-aa97-772bacae4c84:SDM:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmpazuZQZ/9501187c-41a3-4e80-aa97-772bacae4c84/dom_md/leases:1048576 for 3,11,9192
2016-05-31 11:08:03+0200 6164 [8342]: s2 lockspace cd9f23e2-1b3b-46fb-bf56-8687958c4ff8:250:/rhev/data-center/mnt/ovirtita.localdomain.local:_SHE__DOMAIN/cd9f23e2-1b3b-46fb-bf56-8687958c4ff8/dom_md/ids:0
2016-05-31 11:08:03+0200 6164 [8327]: s1 host 250 1 6143 2dd31d21-99d9-4c28-8368-2414f31eec55.ovirtita.l
2016-05-31 11:08:24+0200 6185 [8341]: s2:r2 resource cd9f23e2-1b3b-46fb-bf56-8687958c4ff8:SDM:/rhev/data-center/mnt/ovirtita.localdomain.local:_SHE__DOMAIN/cd9f23e2-1b3b-46fb-bf56-8687958c4ff8/dom_md/leases:1048576 for 3,11,9192
2016-05-31 11:08:24+0200 6185 [8327]: s2 host 250 1 6164 2dd31d21-99d9-4c28-8368-2414f31eec55.ovirtita.l
2016-05-31 11:08:30+0200 6191 [8342]: s3 lockspace 9501187c-41a3-4e80-aa97-772bacae4c84:1:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmpazuZQZ/9501187c-41a3-4e80-aa97-772bacae4c84/dom_md/ids:0
2016-05-31 11:08:30+0200 6191 [8341]: s4 lockspace cd9f23e2-1b3b-46fb-bf56-8687958c4ff8:1:/rhev/data-center/mnt/ovirtita.localdomain.local:_SHE__DOMAIN/cd9f23e2-1b3b-46fb-bf56-8687958c4ff8/dom_md/ids:0
2016-05-31 11:08:51+0200 6212 [8327]: s4 host 1 1 6191 2dd31d21-99d9-4c28-8368-2414f31eec55.ovirtita.l
2016-05-31 11:08:51+0200 6212 [8327]: s4 host 250 1 0 2dd31d21-99d9-4c28-8368-2414f31eec55.ovirtita.l
2016-05-31 11:08:51+0200 6212 [8327]: s3 host 1 1 6191 2dd31d21-99d9-4c28-8368-2414f31eec55.ovirtita.l
2016-05-31 11:08:51+0200 6212 [8327]: s3 host 250 1 0 2dd31d21-99d9-4c28-8368-2414f31eec55.ovirtita.l
2016-05-31 11:08:51+0200 6212 [9662]: s3:r3 resource 9501187c-41a3-4e80-aa97-772bacae4c84:SDM:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmpazuZQZ/9501187c-41a3-4e80-aa97-772bacae4c84/dom_md/leases:1048576 for 3,11,9192
2016-05-31 11:11:31+0200 6372 [8342]: s5 lockspace d260b12c-3bf2-405a-9537-2949064e58b5:250:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmpugQ9c3/d260b12c-3bf2-405a-9537-2949064e58b5/dom_md/ids:0
Any way to understand why the "create_connection" call fails and why it fails
with "Network is unreachable"?
BTW: during setup I see this message
[WARNING] Cannot locate gluster packages, Hyper Converged setup support
will be disabled.
Does it mean hyper converged setup could be supported in 4.0?
Thanks,
Gianluca
8 years, 5 months
Re: [ovirt-users] On 3.6.6, tried doing a live VM storage migration... didn't work
by Markus Stockhausen
I know of at least one live disk migration issue with multi-disk VMs:

https://bugzilla.redhat.com/show_bug.cgi?id=1319400

It might be totally different, but I must admit that this feature has had
several ups and downs over the last years.

Markus

On 26.05.2016 at 3:50 AM, Christopher Cox <ccox(a)endlessnow.com> wrote:
In our old 3.4 oVirt, I know I've migrated storage on live VMs and everything
seemed to work.

However, on 3.6.6 I tried this, saw the warning about moving storage on a live
VM (it wasn't doing much of anything), and went ahead and migrated the storage
from one storage domain to another. But when it was through, even though the VM
was still alive, when I tried to write to a virtual disk that was part of the
move, it paused the VM saying there wasn't enough storage.

I could unpause the VM, but within a few seconds, with things writing to the
virtual disk, it was paused again with the same out-of-space message. The VDSM
logs showed the ENOSPC return code... so it made sense; it's just that the VM
shows plenty of storage there. Once I rebooted the VM, everything went back to
normal.

So is moving storage for a live VM not supported? I guess we got lucky in our
3.4 system (?)
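In case it helps reproduce this, a quick way to compare the moved volume's
virtual size with what is actually allocated on the destination domain (a
sketch; the path is illustrative and assumes file-based storage):

    # Inspect the destination volume after the live storage migration. A thin/sparse
    # volume whose 'disk size' is far below its 'virtual size' has to grow on writes,
    # which is where ENOSPC pauses tend to show up.
    qemu-img info /rhev/data-center/mnt/<storage>/<sd-uuid>/images/<disk-uuid>/<vol-uuid>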
8 years, 5 months