From tdemeter at itsmart.hu Mon Sep 7 16:26:55 2015
From: Demeter Tibor
To: users at ovirt.org
Subject: [ovirt-users] strange iscsi issue
Date: Mon, 07 Sep 2015 22:26:51 +0200
Message-ID: <1462912199.1904413.1441657611396.JavaMail.zimbra@itsmart.hu>

Hi All,

I have to create a test environment because we need to test our new 10GbE infrastructure.
One server with a 10GbE NIC - this is the VDSM host and the oVirt portal.
One server with a 10GbE NIC - this is the storage.

They are connected to each other through a D-Link 10GbE switch.

Everything is good and nice: the host can connect to the storage and I can create and run VMs, but the storage performance from inside a VM seems to be only about 1Gb/sec.
I tried the iperf command to test the connection between the servers, and it reported 9.40 Gbit/sec. I also tried hdparm -tT /dev/mapper/iscsidevice and it gave 400-450 MB/sec. I got the same result on the storage server.

So:

- hdparm test on local storage: ~400 MB/sec
- hdparm test on the oVirt node through the attached iSCSI device: ~400 MB/sec
- hdparm test from inside a VM on its local virtual disk: 93-102 MB/sec

The question is: why?

ps. I have only one ovirtmgmt device, so there are no other networks. The router is only 1GbE, but I have checked and the traffic does not go through it.

Thanks in advance,

Regards,
Tibor

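The measurements above correspond to invocations of roughly this shape; the exact options are not shown, so the commands below are only indicative and the VM disk name is a placeholder:

  iperf -s                                # on the storage server
  iperf -c <storage-host>                 # on the oVirt host; reported ~9.4 Gbit/sec
  hdparm -tT /dev/mapper/iscsidevice      # on the host and on the storage server; ~400-450 MB/sec
  hdparm -tT /dev/<vm-disk>               # inside the VM; ~93-102 MB/sec
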
From alexmcwhirter at triadic.us Mon Sep 7 18:46:58 2015
From: Alex McWhirter
To: users at ovirt.org
Subject: Re: [ovirt-users] strange iscsi issue
Date: Mon, 07 Sep 2015 18:40:29 -0400
Message-ID: <5dcdcb07-8de8-4a8f-8212-3bf4850cf1af@email.android.com>
In-Reply-To: 1462912199.1904413.1441657611396.JavaMail.zimbra@itsmart.hu

Unless you're using a caching filesystem like ZFS, you're going to be limited by how fast your storage back end can actually write to disk. Unless you have quite a large storage back end, 10GbE is probably faster than your disks can read and write.

From tdemeter at itsmart.hu Tue Sep 8 00:59:07 2015
From: Demeter Tibor
To: users at ovirt.org
Subject: Re: [ovirt-users] strange iscsi issue
Date: Tue, 08 Sep 2015 06:59:03 +0200
Message-ID: <764423025.1914503.1441688343273.JavaMail.zimbra@itsmart.hu>
In-Reply-To: 5dcdcb07-8de8-4a8f-8212-3bf4850cf1af@email.android.com

Hi,

Thank you for your reply.
I'm sorry, but I don't think so. This storage is fast because it is SSD-based, and I can read from and write to it with good performance.
I know that in a virtual environment I/O is always slower than on physical hardware, but here I have a very large difference.
Also, I use ext4 as the filesystem.

Thanks,
Tibor

From karli.sjoberg at slu.se Tue Sep 8 02:00:44 2015
From: Karli Sjöberg <karli.sjoberg at slu.se>
To: users at ovirt.org
Subject: Re: [ovirt-users] strange iscsi issue
Date: Tue, 08 Sep 2015 05:45:36 +0000
Message-ID: <1441691136.15670.7.camel@data-b104.adm.slu.se>
In-Reply-To: 764423025.1914503.1441688343273.JavaMail.zimbra@itsmart.hu

My suggestion would be to use a filesystem benchmarking tool like bonnie++ to first test the performance locally on the storage server, and then redo the same test inside of a virtual machine. Also make sure the VM is using a VirtIO disk (either block or SCSI) for best performance. I have tested speeds over 1Gb/s with bonded 1Gb NICs, so I know it should work in theory as well as in practice.

Oh, and for the record: IO doesn't have to be bound by the speed of the storage if the host caches in RAM before sending it over the wire. But that in my opinion is dangerous, and as far as I know it's not activated in oVirt; please correct me if I'm wrong.

/K

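A bonnie++ run of the kind suggested above could look roughly like this; the directory, size and user are placeholders, and the size should be well above the amount of RAM so the page cache does not skew the result:

  bonnie++ -d /mnt/benchdir -s 32768 -n 0 -u nobody   # run it on the storage server, then inside the VM

The -s value is the working-set size in MB, and -n 0 skips the small-file creation tests so the run focuses on sequential throughput, which is what the hdparm numbers measure.
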
From alexmcwhirter at triadic.us Tue Sep 8 02:05:07 2015
From: Alex McWhirter
To: users at ovirt.org
Subject: Re: [ovirt-users] strange iscsi issue
Date: Tue, 08 Sep 2015 02:05:02 -0400
In-Reply-To: 1441691136.15670.7.camel@data-b104.adm.slu.se

Are we talking about a single SSD or an array of them? VM disks are usually large continuous image files, and SSDs are better at delivering many small files than one large continuous file.

I believe oVirt forces sync writes by default, but I'm not sure, as I'm using NFS. The best thing to do is to figure out whether it's a storage issue or a network issue.

Try setting your iSCSI server to use async writes. This can be dangerous if either server crashes or loses power, so I would only do it for testing purposes.

With async writes you should be able to hit near 10Gbps writes, but reads will depend on how much data is cached and how much RAM the iSCSI server has.

Are you presenting a raw disk over iSCSI, an image file, or a filesystem LUN via ZFS or something similar?

Alex sent the message, but his phone sent the typos...

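What async means concretely depends on the storage target, so the snippet below is only an illustration using an NFS export (the path and network are placeholders); for an iSCSI target the equivalent knob is usually its write-cache setting and depends on the target software:

  # /etc/exports on the storage server - for testing only, async risks data loss on a crash
  /srv/vmstore  192.168.0.0/24(rw,async,no_root_squash)
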
From ykaul at redhat.com Tue Sep 8 02:40:46 2015
From: Yaniv Kaul
To: users at ovirt.org
Subject: Re: [ovirt-users] strange iscsi issue
Date: Tue, 08 Sep 2015 09:40:43 +0300
Message-ID: <55EE82EB.2040106@redhat.com>
In-Reply-To: e4189aa7-df74-437c-8d6a-f4eace722bef@email.android.com

On 08/09/15 09:05, Alex McWhirter wrote:
> Try setting your iSCSI server to use async writes. This can be dangerous if either server crashes or loses power, so I would only do it for testing purposes.

I do not recommend it, as it would not reflect real-life usage later on.

Some tips when using iSCSI, and for IO performance in general:
- The VM should be using VirtIO for both disk and NIC.
- I recommend XFS over EXT4, but both are OK generally. If possible, however, I'd test with a raw block device first.
- Ensure you have enough paths to the storage and/or multiple iSCSI sessions. You may wish to configure the iSCSI target portal with multiple IP addresses (not only for redundancy, but for multiple connections).
- I highly recommend 'fio' as an IO tool over Bonnie or hdparm (a sample run is sketched below).

Is the VM CPU the bottleneck, perhaps?
Y.

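A minimal fio run along the lines of the last tip might look like the following; the device path, block sizes and runtime are placeholders, and both jobs are read-only so they can safely target an existing disk:

  # sequential read, comparable to what hdparm -tT measures
  fio --name=seqread --filename=/dev/vdb --readonly --direct=1 --ioengine=libaio --rw=read --bs=1M --iodepth=8 --runtime=60 --time_based

  # 4k random read, closer to a real VM workload
  fio --name=randread --filename=/dev/vdb --readonly --direct=1 --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based
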
From michal.skrivanek at redhat.com Tue Sep 8 04:19:00 2015
From: Michal Skrivanek
To: users at ovirt.org
Subject: Re: [ovirt-users] strange iscsi issue
Date: Tue, 08 Sep 2015 10:18:54 +0200
Message-ID: <164FE651-FC9D-4AAE-B915-7BC94872B43F@redhat.com>
In-Reply-To: 1441691136.15670.7.camel@data-b104.adm.slu.se

On 8 Sep 2015, at 07:45, Karli Sjöberg wrote:
> Also make sure the VM is using a VirtIO disk (either block or SCSI) for best performance.

Also note the new 3.6 support for virtio-blk dataplane [1]. I'm not sure how it will look using artificial stress tools, but in general it improves storage performance a lot.

Thanks,
michal

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1214311

From raymond at worteltje.nl Wed Sep 9 16:16:18 2015
From: Raymond
To: users at ovirt.org
Subject: Re: [ovirt-users] strange iscsi issue
Date: Thu, 10 Sep 2015 00:16:00 +0200
Message-ID: <401904129.8754.1441836960445.JavaMail.zimbra@worteltje.nl>
In-Reply-To: 164FE651-FC9D-4AAE-B915-7BC94872B43F@redhat.com

I have my homelab connected via 10Gb Direct Attached Cables (DAC), using X520 cards and Cisco 2m cables.

I did some tuning on the servers and the storage (HPC background :) ). Here is a short copy-paste from my personal install doc.

You will have to trust me on the whole hardware config and speeds, but I can achieve between 700 and 950 MB/s for 4GB files. Again, this is my homelab, tuned for power over performance: 115W average power usage for the whole stack.

++++++++++++++++++++++++++++++++++++++++++++++++++++++
*All nodes*
install CentOS

Put eth in correct order

MTU=9000

reboot

/etc/sysctl.conf
  net.core.rmem_max=16777216
  net.core.wmem_max=16777216
  # increase Linux autotuning TCP buffer limit
  net.ipv4.tcp_rmem=4096 87380 16777216
  net.ipv4.tcp_wmem=4096 65536 16777216
  # increase the length of the processor input queue
  net.core.netdev_max_backlog=30000

*removed detailed personal info*

*below is storage only*
/etc/fstab
  ext4    defaults,barrier=0,noatime,nodiratime
/etc/sysconfig/nfs
  RPCNFSDCOUNT=16
++++++++++++++++++++++++++++++++++++++++++++++++++++++

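The sysctl values above take effect after sysctl -p (or the reboot mentioned), and with MTU=9000 it is worth checking that jumbo frames really pass end to end; a non-fragmenting ping sized for a 9000-byte MTU does that (the interface name and address are placeholders):

  sysctl -p                           # apply /etc/sysctl.conf without rebooting
  ip link show eth0 | grep mtu        # confirm the NIC is actually at MTU 9000
  ping -M do -s 8972 <storage-ip>     # 8972 bytes of payload + 28 bytes of headers = 9000
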
From ykaul at redhat.com Wed Sep 9 16:50:17 2015
From: Yaniv Kaul
To: users at ovirt.org
Subject: Re: [ovirt-users] strange iscsi issue
Date: Wed, 09 Sep 2015 23:50:09 +0300
Message-ID: <55F09B81.7060207@redhat.com>
In-Reply-To: 401904129.8754.1441836960445.JavaMail.zimbra@worteltje.nl

On 10/09/15 01:16, Raymond wrote:
> Here is a short copy-paste from my personal install doc.

All looks quite good.
Do you have multipathing for iSCSI? I highly recommend it, and then reduce the number of requests (via multipath.conf) down as low as possible (against a high-end all-flash array even 1 is good; I reckon against homelabs the default is OK too).

Regardless, I also recommend increasing the number of TCP sessions; assuming your storage is not a bottleneck, you should be able to get to ~1100 MB/sec.
node.session.nr_sessions in iscsid.conf should be set to 2, for example.
Y.
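
In /etc/iscsi/iscsid.conf that is this line; the setting only applies when sessions are (re)created, so a logout and login of the target is needed for it to take effect:

  node.session.nr_sessions = 2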
--------------070302070303090702030004-- --===============6969377068892129556== Content-Type: multipart/alternative MIME-Version: 1.0 Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="attachment.bin" VGhpcyBpcyBhIG11bHRpLXBhcnQgbWVzc2FnZSBpbiBNSU1FIGZvcm1hdC4KLS0tLS0tLS0tLS0t LS0wNzAzMDIwNzAzMDMwOTA3MDIwMzAwMDQKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFy c2V0PXV0Zi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQKCk9uIDEwLzA5LzE1IDAx OjE2LCBSYXltb25kIHdyb3RlOgo+IEkndmUgbXkgaG9tZWxhYiBjb25uZWN0ZWQgdmlhIDEwR2Ig RGlyZWN0IEF0dGFjaGVkIENhYmxlcyAoREFDKQo+IFVzZSB4NTIwIGNhcmRzIGFuZCBDaXNjbyAy bSBjYWJsZXMuCj4KPiBEaWQgc29tZSB0dW5pbmcgb24gc2VydmVycyBhbmQgc3RvcmFnZSAoSFBD IGJhY2tncm91bmQgOikgKQo+IEhlcmUgaXMgYSBzaG9ydCBjb3B5IHBhc3RlIGZyb20gbXkgcGVy c29uYWwgaW5zdGFsbCBkb2MuCj4KPiBXaG9sZSBIVyBjb25maWcgYW5kIHNwZWVkcyB5b3UgdG8g dHJ1c3QgbWUgb24sIGJ1dCBJIGNhbiBhY2hpZXZlIGJldHdlZW4gNzAwIGFuZCA5NTBNQi9zIGZv ciA0R0IgZmlsZXMuCj4gQWdhaW4gdGhpcyBpcyBmb3IgbXkgaG9tZWxhYiwgcG93ZXIgb3ZlciBw ZXJmb3JtYW5jZSwgMTE1dyBhdmVyYWdlIHBvd2VyIHVzYWdlIGZvciB0aGUgd2hvbGUgc3RhY2su Cj4KPiArKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr KysKPiAqQWxsIG5vZGVzKgo+IGluc3RhbGwgQ2VudE9TCj4KPiBQdXQgZXRoIGluIGNvcnJlY3Qg b3JkZXIKPgo+IE1UVT05MDAwCj4KPiByZWJvb3QKPgo+IC9ldGMvc3lzY3RsLmNvbmYKPiAgIG5l dC5jb3JlLnJtZW1fbWF4PTE2Nzc3MjE2Cj4gICBuZXQuY29yZS53bWVtX21heD0xNjc3NzIxNgo+ ICAgIyBpbmNyZWFzZSBMaW51eCBhdXRvdHVuaW5nIFRDUCBidWZmZXIgbGltaXQKPiAgIG5ldC5p cHY0LnRjcF9ybWVtPTQwOTYgODczODAgMTY3NzcyMTYKPiAgIG5ldC5pcHY0LnRjcF93bWVtPTQw OTYgNjU1MzYgMTY3NzcyMTYKPiAgICMgaW5jcmVhc2UgdGhlIGxlbmd0aCBvZiB0aGUgcHJvY2Vz c29yIGlucHV0IHF1ZXVlCj4gICBuZXQuY29yZS5uZXRkZXZfbWF4X2JhY2tsb2c9MzAwMDAKPgo+ ICpyZW1vdmVkIGRldGFpbGVkIHBlcnNvbmFsIGluZm8qCj4KPiAqYmVsb3cgaXMgc3RvcmFnZSBv bmx5Kgo+IC9ldGMvZnN0YWIKPiAgIGV4dDQgICAgZGVmYXVsdHMsYmFycmllcj0wLG5vYXRpbWUs bm9kaXJhdGltZQo+IC9ldGMvc3lzY29uZmlnL25mcwo+ICAgUlBDTkZTRENPVU5UPTE2CgpBbGwg bG9va3MgcXVpdGUgZ29vZC4KRG8geW91IGhhdmUgbXVsdGlwYXRoaW5nIGZvciBpU0NTST8gSSBo aWdobHkgcmVjb21tZW5kIGl0LCBhbmQgdGhlbgpyZWR1Y2UgdGhlIG51bWJlciBvZiByZXF1ZXN0 cyAodmlhIG11bHRpcGF0aC5jb25mKSBkb3duIGFzIGxvdyBhcwpwb3NzaWJsZSAoYWdhaW5zdCBo aWdoLWVuZCBhbGwgZmxhc2ggYXJyYXkgLSAxIGlzIGdvb2QgdG9vISBJIHJlY2tvbgphZ2FpbnN0 IGhvbWVsYWJzIHRoZSBkZWZhdWx0IGlzIE9LIHRvbykuCgpSZWdhcmRsZXNzLCBJIGFsc28gcmVj b21tZW5kIGluY3JlYXNpbmcgdGhlIG51bWJlciBvZiBUQ1Agc2Vzc2lvbnMgLQphc3N1bWluZyB5 b3VyIHN0b3JhZ2UgaXMgbm90IGEgYm90dGxlbmVjaywgeW91IHNob3VsZCBiZSBhYmxlIHRvIGdl dCB0bwp+MTEwME1CL3NlYy4Kbm9kZS5zZXNzaW9uLi9ucl9zZXNzaW9ucyAvaW4gaXNjc2kuY29u ZiBzaG91bGQgYmUgc2V0IHRvIDIsIGZvciBleGFtcGxlLgpZLgoKPiArKysrKysrKysrKysrKysr KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysKPgo+IC0tLS0tIE9yaWdpbmFs IE1lc3NhZ2UgLS0tLS0KPiBGcm9tOiAiTWljaGFsIFNrcml2YW5layIgPG1pY2hhbC5za3JpdmFu ZWtAcmVkaGF0LmNvbT4KPiBUbzogIkthcmxpIFNqw7ZiZXJnIiA8S2FybGkuU2pvYmVyZ0BzbHUu c2U+LCAiRGVtZXRlciBUaWJvciIgPHRkZW1ldGVyQGl0c21hcnQuaHU+Cj4gQ2M6ICJ1c2VycyIg PHVzZXJzQG92aXJ0Lm9yZz4KPiBTZW50OiBUdWVzZGF5LCBTZXB0ZW1iZXIgOCwgMjAxNSAxMDox ODo1NCBBTQo+IFN1YmplY3Q6IFJlOiBbb3ZpcnQtdXNlcnNdIHN0cmFuZ2UgaXNjc2kgaXNzdWUK Pgo+IE9uIDggU2VwIDIwMTUsIGF0IDA3OjQ1LCBLYXJsaSBTasO2YmVyZyB3cm90ZToKPgo+PiB0 aXMgMjAxNS0wOS0wOCBrbG9ja2FuIDA2OjU5ICswMjAwIHNrcmV2IERlbWV0ZXIgVGlib3I6Cj4+ PiBIaSwKPj4+IFRoYW5rIHlvdSBmb3IgeW91ciByZXBseS4KPj4+IEknbSBzb3JyeSBidXQgSSBk b24ndCB0aGluayBzby4gVGhpcyBzdG9yYWdlIGlzIGZhc3QsIGJlY2F1c2UgaXQgaXMgYSBTU0Qg YmFzZWQgc3RvcmFnZSwgYW5kIEkgY2FuIHJlYWQvd3JpdGUgdG8gaXQgd2l0aCBmYXN0IHBlcmZv cm1hbmNlLgo+Pj4gSSBrbm93LCBpbiB2aXJ0dWFsIGVudmlyb25tZW50IHRoZSBJL08gYWx3YXlz 
IHNsb3dlc3QgdGhhbiBvbiBwaHlzaWNhbCwgYnV0IGhlcmUgSSBoYXZlIGEgdmVyeSBsYXJnZSBk aWZmZXJlbmNlLiAKPj4+IEFsc28sIEkgdXNlIGV4dDQgRlMuCj4+IE15IHN1Z2dlc3Rpb24gd291 bGQgYmUgdG8gdXNlIGEgZmlsZXN5c3RlbSBiZW5jaG1hcmtpbmcgdG9vbCBsaWtlIGJvbm5pZQo+ PiArKyB0byBmaXJzdCB0ZXN0IHRoZSBwZXJmb3JtYW5jZSBsb2NhbGx5IG9uIHRoZSBzdG9yYWdl IHNlcnZlciBhbmQgdGhlbgo+PiByZWRvIHRoZSBzYW1lIHRlc3QgaW5zaWRlIG9mIGEgdmlydHVh bCBtYWNoaW5lLiBBbHNvIG1ha2Ugc3VyZSB0aGUgVk0gaXMKPj4gdXNpbmcgVmlydElPIGRpc2sg KGVpdGhlciBibG9jayBvciBTQ1NJKSBmb3IgYmVzdCBwZXJmb3JtYW5jZS4gSSBoYXZlCj4gYWxz byBub3RlIG5ldyAzLjYgc3VwcG9ydCBmb3IgdmlydGlvLWJsayBkYXRhcGxhbmVbMV0uIE5vdCBz dXJlIGhvdyB3aWxsIGl0IGxvb2sgdXNpbmcgYXJ0aWZpY2lhbCBzdHJlc3MgdG9vbHMsIGJ1dCBp biBnZW5lcmFsIGl0IGltcHJvdmVzIHN0b3JhZ2UgcGVyZm9ybWFuY2UgYSBsb3QuCj4KPiBUaGFu a3MsCj4gbWljaGFsCj4KPiBbMV0gaHR0cHM6Ly9idWd6aWxsYS5yZWRoYXQuY29tL3Nob3dfYnVn LmNnaT9pZD0xMjE0MzExCj4KPj4gdGVzdGVkIHNwZWVkcyBvdmVyIDFHYi9zIHdpdGggYm9uZGVk IDFHYiBOSUNTIHNvIEkga25vdyBpdCBzaG91bGQgd29yawo+PiBpbiB0aGVvcnkgYXMgd2VsbCBh cyBwcmFjdGljZS4KPj4KPj4gT2gsIGFuZCBmb3IgdGhlIHJlY29yZC4gSU8gZG9lc27CtHQgaGF2 ZSB0byBiZSBib3VuZCBieSB0aGUgc3BlZWQgb2YKPj4gc3RvcmFnZSwgaWYgdGhlIGhvc3QgY2Fj aGVzIGluIFJBTSBiZWZvcmUgc2VuZGluZyBpdCBvdmVyIHRoZSB3aXJlLiBCdXQKPj4gdGhhdCBp biBteSBvcGluaW9uIGlzIGRhbmdlcm91cyBhbmQgYXMgZmFyIGFzIEkga25vdywgaXTCtHMgbm90 IGFjdGl2ZWQKPj4gaW4gb1ZpcnQsIHBsZWFzZSBjb3JyZWN0IG1lIGlmIEnCtG0gd3JvbmcuCj4+ Cj4+IC9LCj4+Cj4+PiBUaGFua3MKPj4+Cj4+PiBUaWJvcgo+Pj4KPj4+Cj4+PiAtLS0tLSAyMDE1 LiBzemVwdC4uIDguLCAwOjQwLCBBbGV4IE1jV2hpcnRlciBhbGV4bWN3aGlydGVyQHRyaWFkaWMu dXMgw61ydGE6Cj4+Pgo+Pj4+IFVubGVzcyB5b3UncmUgdXNpbmcgYSBjYWNoaW5nIGZpbGVzeXN0 ZW0gbGlrZSB6ZnMsIHRoZW4geW91J3JlIGdvaW5nIHRvIGJlCj4+Pj4gbGltaXRlZCBieSBob3cg ZmFzdCB5b3VyIHN0b3JhZ2UgYmFjayBlbmQgY2FuIGFjdHVhbGx5IHJpZ2h0IHRvIGRpc2suIFVu bGVzcwo+Pj4+IHlvdSBoYXZlIGEgcXVpdGUgbGFyZ2Ugc3RvcmFnZSBiYWNrIGVuZCwgMTBnYmUg aXMgcHJvYmFibHkgZmFzdGVyIHRoYW4geW91cgo+Pj4+IGRpc2tzIGNhbiByZWFkIGFuZCB3cml0 ZS4KPj4+Pgo+Pj4+IE9uIFNlcCA3LCAyMDE1IDQ6MjYgUE0sIERlbWV0ZXIgVGlib3IgPHRkZW1l dGVyQGl0c21hcnQuaHU+IHdyb3RlOgo+Pj4+PiBIaSBBbGwsCj4+Pj4+Cj4+Pj4+IEkgaGF2ZSB0 byBjcmVhdGUgYSB0ZXN0IGVudmlyb25tZW50IGZvciB0ZXN0aW5nIHB1cnBvc2VzLCBiZWNhdXNl IHdlIG5lZWQgdG8KPj4+Pj4gdGVzdGluZyBvdXIgbmV3IDEwZ2JlIGluZnJhc3RydWN0dXJlLgo+ Pj4+PiBPbmUgc2VydmVyIHRoYXQgaGF2ZSBhIDEwZ2JlIG5pYyAtIHRoaXMgaXMgdGhlIHZkc20g aG9zdCBhbmQgb3ZpcnQgcG9ydGFsLgo+Pj4+PiBPbmUgc2VydmVyIHRoYXQgaGF2ZSBhIDEwZ2Jl IG5pYyAtIHRoaXMgaXMgdGhlIHN0b3JhZ2UuCj4+Pj4+Cj4+Pj4+IEl0cyBjb25uZWN0ZWQgdG8g ZWFjaCBvdGhlciB0aHJvdWdodCBhIGRsaW5rIDEwZ2JlIHN3aXRjaC4KPj4+Pj4KPj4+Pj4gRXZl cnl0aGluZyBnb29kIGFuZCBuaWNlLCB0aGUgc2VydmVyIGNhbiBjb25uZWN0IHRvIHN0b3JhZ2Us IEkgY2FuIG1ha2UgYW5kIHJ1bgo+Pj4+PiBWTXMsIGJ1dCB0aGUgc3RvcmFnZSBwZXJmb3JtYW5j ZSBmcm9tIGluc2lkZSBWTSBzZWVtcyB0byBiZSAxR2Ivc2VjIG9ubHkuCj4+Pj4+IEkgZGlkIHRy eSB0aGUgaXBlcmYgY29tbWFuZCBmb3IgdGVzdGluZyBjb25uZWN0aW9ucyBiZWV0d2VuIHNlcnZl cnMsIGFuZCBpdCB3YXMKPj4+Pj4gOS40MCBHQi9zZWMuIEkgaGF2ZSB0cnkgdG8gdXNlIGhkcGFy bSAtdFQgL2Rldi9tYXBwZXIvaXNjc2lkZXZpY2UgYW5kIGFsc28gaXQKPj4+Pj4gd2FzIDQwMC00 NTAgTUIvc2VjLiBJJ3ZlIGdvdCBzYW1lIHJlc3VsdCBvbiBzdG9yYWdlIHNlcnZlci4KPj4+Pj4K Pj4+Pj4gU286Cj4+Pj4+Cj4+Pj4+IC0gaGRwYXJtIHRlc3Qgb24gbG9jYWwgc3RvcmFnZSB+IDQw MCBtYi9zZWMKPj4+Pj4gLSBoZHBhcm0gdGVzdCBvbiBvdmlydCBub2RlIHNlcnZlciB0aHJvdWdo IGF0dGFjaGVkIGlzY3NpIGRldmljZSB+IDQwMCBNYi9zZWMKPj4+Pj4gLSBoZHBhcm0gdGVzdCBm cm9tIGluc2lkZSB2bSBvbiBsb2NhbCB2aXJ0dWFsIGRpc2sgLSA5My0xMDIgTWIgL3NlYwo+Pj4+ Pgo+Pj4+PiBUaGUgcXVlc3Rpb24gaXMgOiBXaHk/Cj4+Pj4+Cj4+Pj4+IHBzLiBJIEhhdmUgb25s eSBvbmUgb3ZpcnRtZ210IGRldmljZSwgc28gdGhlcmUgYXJlIG5vIG90aGVyIG5ldHdvcmtzLiBU 
aGUgcm91dGVyCj4+Pj4+IGlzIG9ubHkgMWdiZS9zZWMsIGJ1dCBpJ3ZlIHRlc3RlZCBhbmQgdGhl IHRyYWZmaWMgZG9lcyBub3QgZ29pbmcgdGhyb3VnaCAgdGhpcy4KPj4+Pj4KPj4+Pj4gVGhhbmtz IGluIGFkdmFuY2UsCj4+Pj4+Cj4+Pj4+IFJlZ2FyZHMsCj4+Pj4+IFRpYm9yCj4+PiBfX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwo+Pj4gVXNlcnMgbWFpbGlu ZyBsaXN0Cj4+PiBVc2Vyc0BvdmlydC5vcmcKPj4+IGh0dHA6Ly9saXN0cy5vdmlydC5vcmcvbWFp bG1hbi9saXN0aW5mby91c2Vycwo+PiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fX19fX19fXwo+PiBVc2VycyBtYWlsaW5nIGxpc3QKPj4gVXNlcnNAb3ZpcnQub3JnCj4+ IGh0dHA6Ly9saXN0cy5vdmlydC5vcmcvbWFpbG1hbi9saXN0aW5mby91c2Vycwo+IF9fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fCj4gVXNlcnMgbWFpbGluZyBs aXN0Cj4gVXNlcnNAb3ZpcnQub3JnCj4gaHR0cDovL2xpc3RzLm92aXJ0Lm9yZy9tYWlsbWFuL2xp c3RpbmZvL3VzZXJzCj4gX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX18KPiBVc2VycyBtYWlsaW5nIGxpc3QKPiBVc2Vyc0BvdmlydC5vcmcKPiBodHRwOi8vbGlz dHMub3ZpcnQub3JnL21haWxtYW4vbGlzdGluZm8vdXNlcnMKCgotLS0tLS0tLS0tLS0tLTA3MDMw MjA3MDMwMzA5MDcwMjAzMDAwNApDb250ZW50LVR5cGU6IHRleHQvaHRtbDsgY2hhcnNldD11dGYt OApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA4Yml0Cgo8aHRtbD4KICA8aGVhZD4KICAgIDxt ZXRhIGNvbnRlbnQ9InRleHQvaHRtbDsgY2hhcnNldD11dGYtOCIgaHR0cC1lcXVpdj0iQ29udGVu dC1UeXBlIj4KICA8L2hlYWQ+CiAgPGJvZHkgdGV4dD0iIzAwMDAwMCIgYmdjb2xvcj0iI0ZGRkZG RiI+CiAgICA8ZGl2IGNsYXNzPSJtb3otY2l0ZS1wcmVmaXgiPk9uIDEwLzA5LzE1IDAxOjE2LCBS YXltb25kIHdyb3RlOjxicj4KICAgIDwvZGl2PgogICAgPGJsb2NrcXVvdGUKICAgICAgY2l0ZT0i bWlkOjQwMTkwNDEyOS44NzU0LjE0NDE4MzY5NjA0NDUuSmF2YU1haWwuemltYnJhQHdvcnRlbHRq ZS5ubCIKICAgICAgdHlwZT0iY2l0ZSI+CiAgICAgIDxwcmUgd3JhcD0iIj5JJ3ZlIG15IGhvbWVs YWIgY29ubmVjdGVkIHZpYSAxMEdiIERpcmVjdCBBdHRhY2hlZCBDYWJsZXMgKERBQykKVXNlIHg1 MjAgY2FyZHMgYW5kIENpc2NvIDJtIGNhYmxlcy4KCkRpZCBzb21lIHR1bmluZyBvbiBzZXJ2ZXJz IGFuZCBzdG9yYWdlIChIUEMgYmFja2dyb3VuZCA6KSApCkhlcmUgaXMgYSBzaG9ydCBjb3B5IHBh c3RlIGZyb20gbXkgcGVyc29uYWwgaW5zdGFsbCBkb2MuCgpXaG9sZSBIVyBjb25maWcgYW5kIHNw ZWVkcyB5b3UgdG8gdHJ1c3QgbWUgb24sIGJ1dCBJIGNhbiBhY2hpZXZlIGJldHdlZW4gNzAwIGFu ZCA5NTBNQi9zIGZvciA0R0IgZmlsZXMuCkFnYWluIHRoaXMgaXMgZm9yIG15IGhvbWVsYWIsIHBv d2VyIG92ZXIgcGVyZm9ybWFuY2UsIDExNXcgYXZlcmFnZSBwb3dlciB1c2FnZSBmb3IgdGhlIHdo b2xlIHN0YWNrLgoKKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysr KysrKysrKysrCipBbGwgbm9kZXMqCmluc3RhbGwgQ2VudE9TCgpQdXQgZXRoIGluIGNvcnJlY3Qg b3JkZXIKCk1UVT05MDAwCgpyZWJvb3QKCi9ldGMvc3lzY3RsLmNvbmYKICBuZXQuY29yZS5ybWVt X21heD0xNjc3NzIxNgogIG5ldC5jb3JlLndtZW1fbWF4PTE2Nzc3MjE2CiAgIyBpbmNyZWFzZSBM aW51eCBhdXRvdHVuaW5nIFRDUCBidWZmZXIgbGltaXQKICBuZXQuaXB2NC50Y3Bfcm1lbT00MDk2 IDg3MzgwIDE2Nzc3MjE2CiAgbmV0LmlwdjQudGNwX3dtZW09NDA5NiA2NTUzNiAxNjc3NzIxNgog ICMgaW5jcmVhc2UgdGhlIGxlbmd0aCBvZiB0aGUgcHJvY2Vzc29yIGlucHV0IHF1ZXVlCiAgbmV0 LmNvcmUubmV0ZGV2X21heF9iYWNrbG9nPTMwMDAwCgoqcmVtb3ZlZCBkZXRhaWxlZCBwZXJzb25h bCBpbmZvKgoKKmJlbG93IGlzIHN0b3JhZ2Ugb25seSoKL2V0Yy9mc3RhYgogIGV4dDQgICAgZGVm YXVsdHMsYmFycmllcj0wLG5vYXRpbWUsbm9kaXJhdGltZQovZXRjL3N5c2NvbmZpZy9uZnMKICBS UENORlNEQ09VTlQ9MTY8L3ByZT4KICAgIDwvYmxvY2txdW90ZT4KICAgIDxicj4KICAgIEFsbCBs b29rcyBxdWl0ZSBnb29kLiA8YnI+CiAgICBEbyB5b3UgaGF2ZSBtdWx0aXBhdGhpbmcgZm9yIGlT Q1NJPyBJIGhpZ2hseSByZWNvbW1lbmQgaXQsIGFuZCB0aGVuCiAgICByZWR1Y2UgdGhlIG51bWJl ciBvZiByZXF1ZXN0cyAodmlhIG11bHRpcGF0aC5jb25mKSBkb3duIGFzIGxvdyBhcwogICAgcG9z c2libGUgKGFnYWluc3QgaGlnaC1lbmQgYWxsIGZsYXNoIGFycmF5IC0gMSBpcyBnb29kIHRvbyEg SSByZWNrb24KICAgIGFnYWluc3QgaG9tZWxhYnMgdGhlIGRlZmF1bHQgaXMgT0sgdG9vKS48YnI+ CiAgICA8YnI+CiAgICBSZWdhcmRsZXNzLCBJIGFsc28gcmVjb21tZW5kIGluY3JlYXNpbmcgdGhl IG51bWJlciBvZiBUQ1Agc2Vzc2lvbnMgLQogICAgYXNzdW1pbmcgeW91ciBzdG9yYWdlIGlzIG5v 
All looks quite good.
Do you have multipathing for iSCSI? I highly recommend it, and then reduce the number of requests (via multipath.conf) down as low as possible (with a high-end all-flash array, 1 is good too! I reckon for homelabs the default is OK).

Regardless, I also recommend increasing the number of TCP sessions - assuming your storage is not a bottleneck, you should be able to get to ~1100 MB/sec.
node.session.nr_sessions in iscsi.conf should be set to 2, for example.
Y.
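A rough sketch of where those two knobs live on a stock CentOS/RHEL open-iscsi initiator. Note that oVirt/VDSM manages the iSCSI sessions itself, so this is only meant to illustrate the parameters being discussed; the "number of requests" setting in multipath.conf is assumed here to be rr_min_io_rq, which is not named explicitly above.

  # /etc/iscsi/iscsid.conf - default number of sessions for newly discovered targets
  node.session.nr_sessions = 2

  # push the same value into already-discovered node records, then re-login
  iscsiadm -m node -o update -n node.session.nr_sessions -v 2
  iscsiadm -m node --logoutall=all && iscsiadm -m node --loginall=all

  # /etc/multipath.conf - route fewer requests down a path before switching,
  # per the "as low as possible" suggestion for all-flash arrays
  defaults {
      rr_min_io_rq 1
  }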
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--===============6969377068892129556==--