Re: [ovirt-users] Fake power management?
by mots
-----Original Message-----

> From: Barak Azulay <bazulay(a)redhat.com>
> Sent: Mon 17 November 2014 23:30
> To: Patrick Lottenbach <pl(a)a-bot.ch>
> CC: users(a)ovirt.org
> Subject: Re: AW: [ovirt-users] Fake power management?
>
> Well you can hack the solution in the form of replacing the fencing master script to always return success (Eli can help you with that),
> and define an imaginary fencing device on each host ... meaning that the fencing command will always succeed.

This sounds interesting. It's exactly what I need.
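(For the archive, a bare-bones illustration of such an always-succeed agent; how it gets wired into the engine/VDSM fencing flow is exactly the part Eli would need to detail, so treat this purely as a sketch:)

#!/bin/bash
# Sketch of an "always succeed" dummy fence agent, in the spirit of the
# suggestion above. Real fence agents read key=value options (including the
# requested action) on stdin and signal the result via their exit status;
# this one discards the options and reports success unconditionally.
# Registering it with the engine/VDSM is not covered here.
cat > /dev/null
exit 0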
> But this may be risky ... as you might end up with the same VM running on 2 hosts.

As I see it, this would only happen if someone unplugs the network interface. I know this is a way to break the cluster. If someone unplugs the interface, then everything gets started twice anyway, thanks to pacemaker being configured to ignore the lack of quorum, and it would look silly in front of the customer.

> And one last note ... when you disconnect one of the hosts in the demo you mentioned, I think you'll be better off disconnecting the host that does not run the engine ...

It just gets restarted on the remaining node and resumes operation. It even remembers which guests ran on which host.
That part is really safe. The storage is configured to only report data as written when the write operation has finished on all (currently online) nodes, and disk write caches are turned off in lvm.conf. PostgreSQL is resilient enough to survive a crash like this.

Or am I missing something that might break?

> Barak

mots

> ----- Original Message -----
> > From: "mots" <mots(a)nepu.moe>
> > To: "Barak Azulay" <bazulay(a)redhat.com>
> > Cc: users(a)ovirt.org
> > Sent: Monday, November 17, 2014 12:58:20 PM
> > Subject: AW: [ovirt-users] Fake power management?
> >
> > Yes, pacemaker manages the engine. That part is working fine, the engine
> > restarts on the remaining node without problems.
> > It's just that the guests don't come back up until the powered down node has
> > been fenced manually.
> >
> > -----Original Message-----
> > > From: Barak Azulay <bazulay(a)redhat.com>
> > > Sent: Mon 17 November 2014 11:35
> > > To: Patrick Lottenbach <pl(a)a-bot.ch>
> > > CC: users(a)ovirt.org
> > > Subject: Re: [ovirt-users] Fake power management?
> > >
> > > ----- Original Message -----
> > > > From: "mots" <mots(a)nepu.moe>
> > > > To: users(a)ovirt.org
> > > > Sent: Friday, November 14, 2014 4:54:08 PM
> > > > Subject: [ovirt-users] Fake power management?
> > > >
> > > > Hello,
> > > >
> > > > I'm building a small demonstration system for our sales team to take to a
> > > > customer so that they can show them our solutions.
> > > > Hardware: Two Intel NUCs, a 4-port switch and a laptop.
> > > > Engine: Runs as a VM on one of the NUCs; which one it runs on is
> > > > determined by pacemaker.
> > > > Storage: Also managed by pacemaker, it's drbd backed and accessed with
> > > > iscsi.
> > > > oVirt version: 3.5
> > > > OS: CentOS 6.6
> > > >
> > > > The idea is to have our sales representative (or the potential customer
> > > > himself) randomly pull the plug on one of the NUCs to show that the
> > > > system stays operational when part of the hardware fails.
> > >
> > > I assume you are aware that the engine might fence the node it is running
> > > on ...
> > > Or do you use pacemaker to run the engine as well?
> > >
> > > > My problem is that I don't have any way to implement power management, so
> > > > the Engine can't fence nodes and won't restart guests that were running on
> > > > the node which lost power. In pacemaker I can just configure fencing over SSH
> > > > or even disable the requirement to do so completely. Is there something
> > > > similar for oVirt, so that the Engine will consider a node which it can't
> > > > connect to to be powered down?
> > > >
> > > > Regards,
> > > >
> > > > mots
> > > >
> > > > _______________________________________________
> > > > Users mailing list
> > > > Users(a)ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Live Merge Functionality disabled on CentOS 6.6 Node and oVirt 3.5.0
by Markus Stockhausen
IIRC you simply need libvirt 1.2.9

On 20.11.2014 16:20, Bob Doolittle <bob@doolittle.us.com> wrote:

Are there any bugs related to the changes in question that we can track
so we know when the changes are reflected in our distros of interest?

Thanks,
    Bob

On 11/20/2014 03:51 AM, s k wrote:
> Hi,
>
> Live snapshot indeed works, only live merge is not working. I guess we
> will have to wait until it is available on CentOS.
>
> Thanks!
>
> Sokratis
>
> ----------------------------------------
> Date: Thu, 20 Nov 2014 09:49:40 +0800
> Subject: Re: [ovirt-users] Live Merge Functionality disabled on CentOS
> 6.6 Node and oVirt 3.5.0
> From: coffee.zyr@gmail.com
> To: daniel.helgenberger@m-box.de
> CC: sokratis123k@outlook.com; users@ovirt.org
>
> Hi,
>     as I know, live merge is only available from the versions in the
> fedora virt-preview repo. please see [1]
>
> [1] http://www.ovirt.org/Features/Live_Merge#Current_status
>
> 2014-11-20 2:11 GMT+08:00 Daniel Helgenberger <daniel.helgenberger@m-box.de>:
>
>     On 19.11.2014 14:06, s k wrote:
>     > Hello,
>     >
>     > I performed a full yum upgrade on a CentOS 6.5 Node which was upgraded to 6.6
>     > and currently the following RPM versions are installed:
>     >
>     > [root@node01 ~]# uname -a
>     > Linux node01 2.6.32-504.1.3.el6.x86_64 #1 SMP Tue Nov 11 17:57:25 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>     > [root@node01 ~]# rpm -qa |grep libvirt
>     > libvirt-python-0.10.2-46.el6_6.1.x86_64
>     > libvirt-client-0.10.2-46.el6_6.1.x86_64
>     > libvirt-lock-sanlock-0.10.2-46.el6_6.1.x86_64
>     > libvirt-0.10.2-46.el6_6.1.x86_64
>     > [root@node01 ~]#
>     > [root@node01 ~]# rpm -qa |grep kvm
>     > qemu-kvm-rhev-debuginfo-0.12.1.2-2.415.el6_5.14.x86_64
>     > qemu-kvm-rhev-tools-0.12.1.2-2.415.el6_5.14.x86_64
>     > qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>     > [root@node01 ~]# rpm -qa |grep qemu
>     > gpxe-roms-qemu-0.9.7-6.12.el6.noarch
>     > qemu-kvm-rhev-debuginfo-0.12.1.2-2.415.el6_5.14.x86_64
>     > qemu-img-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>     > qemu-kvm-rhev-tools-0.12.1.2-2.415.el6_5.14.x86_64
>     > qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>     > [root@node01 ~]# rpm -qa |grep vdsm
>     > vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
>     > vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
>     > vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
>     > vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
>     > vdsm-4.16.7-1.gitdb83943.el6.x86_64
>     > vdsm-python-4.16.7-1.gitdb83943.el6.noarch
>     > vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
>     >
>     > The host reports that Live Snapshot Support is Active on the General Tab but
>     > I'm unable to delete a snapshot.
>     >
>     > Any ideas?
>     Hm, could it be you mix up live snapshot and live merge? Live snapshot
>     works since 3.4.3 quite well. Live merge however is still unsupported, as
>     I think it requires some quite new stuff from libvirt. It will
>     eventually work on EL7 [1].
>
>     [1] https://bugzilla.redhat.com/show_bug.cgi?id=1062142
>     >
>     > Thank you,
>     >
>     > Sokratis
>
>     --
>     Daniel Helgenberger
>     m box bewegtbild GmbH
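A quick way to double-check both points from the node itself is to ask VDSM for its capabilities; the key names below are assumed from 3.5-era getVdsCaps output, so verify them against what your node actually prints:

# Show whether VDSM reports live merge support and which libvirt it sees
# ("liveMerge" and the packages2/libvirt entries are assumed key names).
$ vdsClient -s 0 getVdsCaps | egrep -i 'livemerge|libvirt'
# And the plain package check for the libvirt version mentioned above:
$ rpm -q libvirt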
Live Merge Functionality disabled on CentOS 6.6 Node and oVirt 3.5.0
by s k
Hello,

I performed a full yum upgrade on a CentOS 6.5 Node which was upgraded to 6.6, and currently the following RPM versions are installed:

[root@node01 ~]# uname -a
Linux node01 2.6.32-504.1.3.el6.x86_64 #1 SMP Tue Nov 11 17:57:25 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@node01 ~]# rpm -qa |grep libvirt
libvirt-python-0.10.2-46.el6_6.1.x86_64
libvirt-client-0.10.2-46.el6_6.1.x86_64
libvirt-lock-sanlock-0.10.2-46.el6_6.1.x86_64
libvirt-0.10.2-46.el6_6.1.x86_64
[root@node01 ~]#
[root@node01 ~]# rpm -qa |grep kvm
qemu-kvm-rhev-debuginfo-0.12.1.2-2.415.el6_5.14.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6_5.14.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
[root@node01 ~]# rpm -qa |grep qemu
gpxe-roms-qemu-0.9.7-6.12.el6.noarch
qemu-kvm-rhev-debuginfo-0.12.1.2-2.415.el6_5.14.x86_64
qemu-img-rhev-0.12.1.2-2.415.el6_5.14.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6_5.14.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
[root@node01 ~]# rpm -qa |grep vdsm
vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-4.16.7-1.gitdb83943.el6.x86_64
vdsm-python-4.16.7-1.gitdb83943.el6.noarch
vdsm-cli-4.16.7-1.gitdb83943.el6.noarch

The host reports that Live Snapshot Support is Active on the General Tab, but I'm unable to delete a snapshot.

Any ideas?

Thank you,

Sokratis
Re: [ovirt-users] oVirt 3.5 & NAT
by Phil Daws
How does one add multiple custom properties? I tried:
engine-config -s CustomDeviceProperties='{type=interface;prop={vlan=^[a-zA-Z0-9_ ---]+$}}{type=interface;prop={bridge=^[a-zA-Z0-9_ ---]+$}}'
but ended up with one property called vlan and the other called prop :) If I can add both vlan and bridge, then I should be able to use a vNIC profile for adding an interface directly to OVS using a custom hook.
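(Untested guess on the syntax: both properties may need to live in a single map, separated by a semicolon inside prop={}; if someone knows the exact format please correct me:)

engine-config -s CustomDeviceProperties='{type=interface;prop={vlan=^[a-zA-Z0-9_ ---]+$;bridge=^[a-zA-Z0-9_ ---]+$}}'
engine-config -g CustomDeviceProperties    # check how the engine actually parsed it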
Thanks, Phil
----- Original Message -----
From: "Phil Daws" <uxbod(a)splatnix.net>
To: users(a)ovirt.org
Sent: Monday, 27 October, 2014 3:04:20 PM
Subject: Re: [ovirt-users] oVirt 3.5 & NAT
Well, in fact I have got something working now! I left ovirtmgmt and em1 alone but ran:
$ ovs-vsctl add-br ovsbr0
$ ip link add name veth0 type veth peer name veth1
$ brctl addif ovirtmgmt veth0
$ ovs-vsctl add-port ovsbr0 veth1
$ ip add add XXX.XXX.XXX.XXX/29 dev veth1
$ ip link set veth0 up && ip link set veth1 up
and now veth1 is responding as well as veth0.
ovs-vsctl show
08554d11-3ba7-4303-b9d5-6a09f23c9057
    Bridge "ovsbr0"
        Port "veth1"
            Interface "veth1"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
so what I think I should do now is create a custom parameter on the Engine Manager that allows one to define an OVS bridge name and VLAN, so when a virtual guest is created it can be assigned to the new bridge, with the use of a custom hook.
Thanks, Phil
----- Original Message -----
From: "Phil Daws" <uxbod(a)splatnix.net>
To: "Antoni Segura Puimedon" <asegurap(a)redhat.com>
Cc: users(a)ovirt.org
Sent: Monday, 27 October, 2014 2:10:34 PM
Subject: Re: [ovirt-users] oVirt 3.5 & NAT
Darn, looks like this will not work :( The problem is that oVirt creates the bridge ovirtmgmt and binds it to your interface, e.g. em1. At that point you have networking running. If you then try to add that bridge to the OVS stack, your networking stops :( I tried to add it as a port using ovs-vsctl add-port ovsbr0 ovirtmgmt, which is accepted, but then networking stops. As soon as I remove it again, networking comes back to life. There does not seem to be a way to have two co-existing bridges :( Thanks, Phil
----- Original Message -----
From: "Antoni Segura Puimedon" <asegurap(a)redhat.com>
To: "Phil Daws" <uxbod(a)splatnix.net>
Cc: "Dan Kenigsberg" <danken(a)redhat.com>, users(a)ovirt.org
Sent: Monday, 27 October, 2014 12:13:30 PM
Subject: Re: [ovirt-users] oVirt 3.5 & NAT
----- Original Message -----
> From: "Phil Daws" <uxbod(a)splatnix.net>
> To: "Antoni Segura Puimedon" <asegurap(a)redhat.com>
> Cc: "Dan Kenigsberg" <danken(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, October 27, 2014 11:41:56 AM
> Subject: Re: [ovirt-users] oVirt 3.5 & NAT
>
> Hi Antoni:
>
> Yes, prior to the reboot it did work okay. This is how it should look I
> believe:
>
> Bridge "ovirtmgmt"
> Port "mgmt0"
> Interface "mgmt0"
> type: internal
> Port "ovsbr0"
> Interface "ovsbr0"
> type: internal
>
> So the bridge would be defined by oVirt then I guess with a custom hook that
> would then be added to the OVS stack ?
exactly! You could just make a hook script that runs an after_network_setup
hook that does the ovs-vsctl for you ;-)
Here you can see the presentation I gave last February at devconf about extending
with configurators and hooks.
http://blog.antoni.me/devconf14/#/8/1
I linked directly to a before_network_setup hook sample, because it works just like
the after_network_setup hook. Instead of logging to systemd, just add that if
'remove' is not in data and network == 'ovirtmgmt', it adds the network bridge to
the vswitch with python's subprocess.call or subprocess.check_output.
You can send it if you want me to take a look ;-)
PS: It is possible to write the hooks in bash, c, perl, etc. But we only have the
convenience read_json methods and such for python. If you wanted to, you could have
a simple bash hook that just checked if there was an ovirtmgmt bridge and it would
add it doing ovs-vsctl in the before_vdsm_start hooking point. That would have the
drawback that changing the ovirtmgmt bridge with oVirt UI would leave it disconnected
again.
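As a rough illustration of that "simple bash hook" option (hook directory and bridge names are assumptions; ovsbr0 matches the bridge used earlier in this thread, and the script is untested):

#!/bin/bash
# Minimal before_vdsm_start hook sketch: make sure ovirtmgmt is a port of
# the OVS bridge whenever vdsmd starts. Usually installed as an executable
# file under /usr/libexec/vdsm/hooks/before_vdsm_start/.
OVS_BRIDGE="ovsbr0"
MGMT_BRIDGE="ovirtmgmt"

# Only act if the management bridge actually exists on this host.
if [ -d "/sys/class/net/${MGMT_BRIDGE}" ]; then
    # Add it to the OVS bridge unless it is already a port there.
    if ! ovs-vsctl list-ports "${OVS_BRIDGE}" | grep -qx "${MGMT_BRIDGE}"; then
        ovs-vsctl add-port "${OVS_BRIDGE}" "${MGMT_BRIDGE}"
    fi
fi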
>
> Thanks, Phil
>
> ----- Original Message -----
> From: "Antoni Segura Puimedon" <asegurap(a)redhat.com>
> To: "Phil Daws" <uxbod(a)splatnix.net>
> Cc: "Dan Kenigsberg" <danken(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, 27 October, 2014 9:56:38 AM
> Subject: Re: [ovirt-users] oVirt 3.5 & NAT
>
>
>
> ----- Original Message -----
> > From: "Phil Daws" <uxbod(a)splatnix.net>
> > To: "Antoni Segura Puimedon" <asegurap(a)redhat.com>
> > Cc: "Dan Kenigsberg" <danken(a)redhat.com>, users(a)ovirt.org
> > Sent: Monday, October 27, 2014 10:37:18 AM
> > Subject: Re: [ovirt-users] oVirt 3.5 & NAT
> >
> > That is what I tried but oVirt appears to overwrite the bridge information
> > on
> > boot :( Thanks, Phil
>
> But before rebooting, does it work as you intended? If so, you could just
> make
> a vdsm hook that adds ovirtmgmt to the ovs bridge after it is set up. (I
> could
> give more directions into how to do it).
>
> >
> > ----- Original Message -----
> > From: "Antoni Segura Puimedon" <asegurap(a)redhat.com>
> > To: "Phil Daws" <uxbod(a)splatnix.net>
> > Cc: "Dan Kenigsberg" <danken(a)redhat.com>, users(a)ovirt.org
> > Sent: Monday, 27 October, 2014 8:00:33 AM
> > Subject: Re: [ovirt-users] oVirt 3.5 & NAT
> >
> >
> >
> > ----- Original Message -----
> > > From: "Phil Daws" <uxbod(a)splatnix.net>
> > > To: "Dan Kenigsberg" <danken(a)redhat.com>
> > > Cc: users(a)ovirt.org
> > > Sent: Saturday, October 25, 2014 5:02:59 PM
> > > Subject: Re: [ovirt-users] oVirt 3.5 & NAT
> > >
> > > Hmmm, this is becoming difficult ..
> > >
> > > I have added into the engine the custom hook and understand how that will
> > > work. The issue is how can a single NIC use two different bridges ?
> > > Example with OVS would be that one requires:
> > >
> > > > em1 -+ ovirtmgmt (bridge) -> management IP (public)
> > > >      + ovs (bridge)       -> firewall IP (public)
> > > >        |
> > > >        + vlan 1
> > > >        + vlan 2
> > >
> > > this works fine when using OVS and KVM, without oVirt, so there must be a
> > > way
> > > to hook the two together without a Neutron appliance.
> > >
> > > Any thoughts ? Thanks, Phil.
> >
> > I haven't tried this, and it may not work, but what happens if you add the
> > ovirtmgmt
> > bridge as a port of the ovs bridge?
> > >
> > >
> > > ----- Original Message -----
> > > From: "Dan Kenigsberg" <danken(a)redhat.com>
> > > To: "Phil Daws" <uxbod(a)splatnix.net>
> > > Cc: users(a)ovirt.org
> > > Sent: Wednesday, 22 October, 2014 3:54:46 PM
> > > Subject: Re: [ovirt-users] oVirt 3.5 & NAT
> > >
> > > On Wed, Oct 22, 2014 at 03:12:09PM +0100, Phil Daws wrote:
> > > > Thanks Dan & Antoni:
> > > >
> > > > I wonder then if I could replace the standard libvirt defined network
> > > > with
> > > > an OpenVSwitch one like I have on my dev system? That is just straight
> > > > KVM with OVS integrated. Maybe a bit more overhead in administration
> > > > but
> > > > possibly less than having to spin up a Neutron Appliance.
> > >
> > > Once you start to use the vdsm-hook-extnet, all that you need to do is
> > > to replace the libvirt-side definition of the "external network". This
> > > may well be an OpenVSwitch-based network e.g.
> > > http://libvirt.org/formatnetwork.html#elementVlanTag
> > > _______________________________________________
> > > Users mailing list
> > > Users(a)ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > >
> >
>
Cumulative VM network usage
by Lior Vernia
Hello,
The need to monitor cumulative VM network usage has come up several
times in the past; while this should be handled as part of
(https://bugzilla.redhat.com/show_bug.cgi?id=1063343), in the mean time
I've written a small Python script that monitors those statistics,
attached here.
The script polls the engine via RESTful API periodically and dumps the
up-to-date total usage into a file. The output is a multi-level
map/dictionary in JSON format, where:
* The top level keys are VM names.
* Under each VM, the next level keys are vNIC names.
* Under each vNIC, there are keys for total 'rx' (received) and 'tx'
(transmitted), where the values are in Bytes.
The script is built to run forever. It may be stopped at any time, but
while it's not running VM network usage data will "be lost". When it's
re-run, it'll go back to accumulating data on top of its previous data.
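For a quick human-readable summary of such a dump, something along these lines works against the format described above (the output file name is whatever the script was configured to write, "usage.json" here; jq must be installed):

# Sum rx/tx over all vNICs of each VM (values are Bytes, per the format above).
$ jq -r 'to_entries[] | "\(.key): rx=\([.value[].rx] | add) B tx=\([.value[].tx] | add) B"' usage.json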
A few disclaimers:
* I haven't tested this with any edge cases (engine service dies, etc.).
* Tested this with tens of VMs, not sure it'll work fine with hundreds.
* The PERIOD_TIME (polling interval) should be set so that it matches
both the engine's and vdsm's polling interval (see comments inside the
script), otherwise data will be either lost or counted multiple times.
From 3.4 onwards, default configuration should be fine with 15 seconds.
* The precision of traffic measurement on a NIC is 0.1% of the
interface's speed over each PERIOD_TIME interval. For example, on a
1Gbps vNIC, when PERIOD_TIME = 15s, data will only be measured in 15Mb
(~2MB) quanta. Specifically what this means is, that in this example,
any traffic smaller than 2MB over a 15-second period would be negligible
and wouldn't be recorded.
Knock yourselves out :)
Power Management - but no LO?
by Daniel Helgenberger
Hello,
I am toying with an idea involving oVirt's PM capabilities (esp. the
cluster power_saving policy) in conjunction with some consumer grade
hosts for raw compute applications.
Now, these hosts do not have any lights out or BMC capabilities.
However, they could be started with WOL packets.
Is there a way this can be done in oVirt? Shutting down hosts with
ssh and starting them with WOL?
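Outside of oVirt itself the mechanics are simple; a rough sketch of both directions, assuming the wakeonlan utility (or ether-wake) is available on a box that stays up, and with placeholder MAC/host names:

# Wake a powered-off host by broadcasting a WOL magic packet to its MAC.
$ wakeonlan -i 192.168.1.255 aa:bb:cc:dd:ee:ff
# Shut a host down over ssh once it is in maintenance mode.
$ ssh root@host01 'shutdown -h now'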
Thanks!
--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handeslregister: Amtsgericht Charlottenburg / HRB 112767
Gluster command[<UNKNOWN>] failed on server ovhost1
by Fumihide Tani
Please help!
I'm hosting an oVirt 3.5 Engine server (CentOS 6.5) and oVirt 3.5 Host servers (CentOS 7.0).
Today I stopped all oVirt services (ovirt-engine and vdsmd) and updated the oVirt 3.5 Engine server and the oVirt 3.5 Host server with "yum update".
During the update, some new Gluster components were installed.
After the update, I tried to reboot the oVirt 3.5 Engine and Node servers, but oVirt is not working.
The Portal's Events tab shows:
- Status of host ovhost1 was set to NonOperational.
- Gluster command[<UNKNOWN>] failed on server ovhost1.
My oVirt servers and VMs are not operational now.
How can I resolve this?
Many thanks,
Fumihide Tani
LDAP
by Koen Vanoppen
Hello everybody,
We updated our oVirt to 3.5, but now we see some errors concerning LDAP. I
already searched online for a guide to the AAA config, but can't seem to
find anything...
Does anybody already have a clear how-to for the AAA config?
This is the error we sometimes get in our engine.log (we are still able to
log in with LDAP, btw):
2014-11-20 06:42:06,539 ERROR
[org.ovirt.engine.extensions.aaa.builtin.kerberosldap.DirectorySearcher]
(ajp--127.0.0.1-8702-32) Failed ldap search server
ldap://***.brussels.airport:*** using user ****(a)BRUSSELS.AIRPORT due to :
[LDAP: error code 34 - 0000208F: LdapErr: DSID-0C09074B, comment: Error
processing name, data 0, v23f0]; nested exception is
javax.naming.InvalidNameException: : [LDAP: error code 34 - 0000208F:
LdapErr: DSID-0C09074B, comment: Error processing name, data 0, v23f0];
remaining name ''. We should try the next server
Kind regards,
Koen
IPA-auth: user password expired
by Demeter Tibor
Hi,
I have an IPA server 3.0 on CentOS 6.6.
I successfully attached it to my oVirt cluster.
I can see the users on the oVirt Users tab, but after authentication I always get this error:
Cannot Login. User Password has expired. Use the following URL to change the password: (nothing)
I have tried different long passwords and different users, but it's the same.
Is this IPA version compatible with oVirt 3.5?
What did I do wrong?
Thanks in advance,
Tibor
Re: [ovirt-users] separate ovirtmgmt from glusterfs traffic
by Juan Pablo Lorier
Hi,
In my experience, having oVirt traffic on the same NIC as Gluster can
make your platform unstable. I was using it for large file storage, and
Gluster generated so much traffic that oVirt got confused and started
marking hosts as unavailable because of high latency.
I opened an RFE over a year ago, but had no luck getting the team to
implement it. In the RFE I asked for a way in the UI to decide which
NIC to use for Gluster other than the management network, which is the
one oVirt lets you use.
There's another way to do this, and it's from outside oVirt. There you
have to unregister and re-register the bricks using the Gluster console
commands. This way, when you register the bricks, you can specify the IP
address of the spare NIC, and then the Gluster traffic will not interfere
with the management network.
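Roughly, re-pointing a brick at the storage NIC looks like this with the stock Gluster CLI; the volume name, brick paths and addresses below are made up for illustration, and the right procedure (replace-brick vs. a remove-brick/add-brick cycle) depends on your volume type and Gluster version, so check the docs before touching production data:

# Make the peer known under its storage-network address first.
$ gluster peer probe 10.10.10.1
# Then swap the brick definition from the mgmt name to the storage name.
$ gluster volume replace-brick vmstore node01:/bricks/vmstore 10.10.10.1:/bricks/vmstore commit force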
There's a step that I don't recall exactly, but oVirt is going to need to
know that the bricks are no longer on the mgmt IP; maybe someone else on
the list can help with this. I can tell you that if you search the list
you'll see my posts about this and the replies of those who helped me
back then.
Regards,