Hi All,
it's the same for me. I've updated all my hosts to the latest release and
thought it would now use libgfapi, since BZ 1022961
<https://bugzilla.redhat.com/1022961> is listed in the release notes under
enhancements. Are there any steps that need to be taken after upgrading for
this to work?
Thank you,
Sven
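
(A hedged sketch of the kind of engine-side switch this may involve; the
LibgfApiSupported key and whether it applies to this exact release are
assumptions, so please check the release notes before relying on it:)

  # On the engine host (assumption: a LibgfApiSupported engine-config key exists in this release)
  engine-config -g LibgfApiSupported          # show the current value
  engine-config -s LibgfApiSupported=true     # hypothetical step: enable native gluster (libgfapi) disk access
  systemctl restart ovirt-engine              # engine-config changes take effect after an engine restart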
From: users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] On Behalf Of Mahdi Adnan
Sent: Saturday, 8 July 2017 09:35
To: Ralf Schenk <rs@databay.de>; users@ovirt.org; ykaul@redhat.com
Subject: Re: [ovirt-users] Very poor GlusterFS performance
So oVirt accesses gluster via FUSE? I thought it was using libgfapi.
When can we expect it to work with libgfapi?
And what about the changelog of 4.1.3?
BZ 1022961: "Gluster: running a VM from a gluster domain should use gluster
URI instead of a fuse mount"
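
(One rough way to see which path a running VM actually uses is to inspect
the qemu process arguments on its host; the fuse mount path below is only
an example of where oVirt typically mounts gluster storage domains:)

  # On the host running the VM: look at how qemu opens its disks
  ps -ef | grep [q]emu-kvm | tr ' ' '\n' | grep -E '^file=|gluster'
  # file=gluster://host/volume/...            -> libgfapi (native gluster URI)
  # file=/rhev/data-center/mnt/glusterSD/...  -> FUSE-mounted storage domain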
--
Respectfully
Mahdi A. Mahdi
________________________________
From: users-bounces@ovirt.org <users-bounces@ovirt.org> on behalf of Ralf Schenk <rs@databay.de>
Sent: Monday, June 19, 2017 7:32:45 PM
To: users@ovirt.org
Subject: Re: [ovirt-users] Very poor GlusterFS performance
Hello,
Gluster performance is bad. That's why I asked for native qemu-libgfapi
access to gluster volumes for oVirt VMs, which I thought had been possible
since 3.6.x. The documentation is misleading, and even in 4.1.2 oVirt still
uses FUSE to mount gluster-based VM disks.
Bye
On 19.06.2017 at 17:23, Darrell Budic wrote:
Chris-
You probably need to head over to gluster-users@gluster.org for help with
performance issues.
That said, what kind of performance are you getting, via some form of
testing like bonnie++ or even dd runs? Raw bricks vs gluster performance is
useful to determine what kind of performance you're actually getting.
Beyond that, I'd recommend dropping the arbiter bricks and re-adding them
as full replicas; they can't serve distributed data in this configuration
and may be slowing things down on you. If you've got a storage network set
up, make sure it's using the largest MTU it can, and consider adding/testing
these settings that I use on my main storage volume (example commands after
the list):
performance.io-thread-count: 32
client.event-threads: 8
server.event-threads: 3
performance.stat-prefetch: on
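
(A hedged sketch of how the MTU check and those options would be applied;
the interface name, the 9000 MTU, and the volume name, taken from Chris's
output further down, are assumptions for illustration:)

  # Check / raise the MTU on the storage network interface (interface name is an example)
  ip link show dev ens1f0 | grep -o 'mtu [0-9]*'
  ip link set dev ens1f0 mtu 9000
  # Apply the suggested volume options
  gluster volume set vmssd performance.io-thread-count 32
  gluster volume set vmssd client.event-threads 8
  gluster volume set vmssd server.event-threads 3
  gluster volume set vmssd performance.stat-prefetch on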
Good luck,
-Darrell
On Jun 19, 2017, at 9:46 AM, Chris Boot <bootc@bootc.net> wrote:
Hi folks,
I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.
Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of
>= 9.20 Gbits/sec between the three servers).
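
(For reference, a minimal sketch of how such an iperf3 figure is measured
between two hosts; the host name is a placeholder:)

  # On one host, start an iperf3 server
  iperf3 -s
  # On another host, run a TCP throughput test against it
  iperf3 -c ovirt1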
To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.
My volume configuration looks like this:
Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
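
(For completeness, the effective values, including inherited defaults not
shown above, can be dumped with gluster volume get; a minimal sketch
assuming the volume name above:)

  # Show every effective option for the volume
  gluster volume get vmssd all
  # Spot-check the options most relevant to VM latency
  gluster volume get vmssd all | grep -E 'event-threads|io-thread|remote-dio|strict-o-direct'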
I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.
Cheers,
Chris
--
Chris Boot
bootc@bootc.net
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Ralf Schenk
phone +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs@databay.de
Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de
Registered office/district court: Aachen * HRB: 8437 * VAT ID: DE 210844202
Board: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm. Philipp Hermanns
Chairman of the supervisory board: Wilhelm Dohmen