
On 10/09/15 01:16, Raymond wrote:
I have my homelab connected via 10Gb Direct Attach Cables (DAC), using X520 cards and Cisco 2m cables.
I did some tuning on the servers and storage (HPC background :) ). Here is a short copy-paste from my personal install doc.
You'll have to trust me on the whole HW config and speeds, but I can achieve between 700 and 950 MB/s for 4GB files. Again, this is for my homelab: power over performance, 115W average power usage for the whole stack.
++++++++++++++++++++++++++++++++++++++++++++++++++++++
*All nodes*
Install CentOS
Put the eth interfaces in the correct order
Set MTU=9000
Reboot
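On CentOS, the jumbo-frame MTU is usually made persistent in the interface's ifcfg file; a sketch (the interface name eth0 is an example):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0  (interface name is an example)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MTU=9000
```

Note that both ends of each link (NICs and switch ports) must agree on MTU 9000, or large frames will be dropped.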
/etc/sysctl.conf:
net.core.rmem_max=16777216
net.core.wmem_max=16777216
# increase Linux autotuning TCP buffer limits
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
# increase the length of the processor input queue
net.core.netdev_max_backlog=30000
*removed detailed personal info*
*below is storage only*
/etc/fstab:
ext4 defaults,barrier=0,noatime,nodiratime
/etc/sysconfig/nfs:
RPCNFSDCOUNT=16
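A full fstab entry with those mount options might look like this (device and mount point are placeholders):

```
# /etc/fstab -- example entry; device and mount point are hypothetical
/dev/sdb1  /export/data  ext4  defaults,barrier=0,noatime,nodiratime  0 0
```

barrier=0 disables write barriers, trading crash safety for throughput; noatime/nodiratime skip access-time updates on reads.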
All looks quite good. Do you have multipathing for iSCSI? I highly recommend it, and then reduce the number of requests per path (via multipath.conf) as low as possible (against a high-end all-flash array, 1 is good too! Against homelabs I reckon the default is OK too). Regardless, I also recommend increasing the number of TCP sessions: assuming your storage is not a bottleneck, you should be able to get to ~1100 MB/sec. For example, node.session.nr_sessions in iscsi.conf should be set to 2.
Y.
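The two knobs mentioned above would look roughly like this (a sketch: on CentOS the initiator file is typically /etc/iscsi/iscsid.conf, and the multipath parameter name is assumed to be rr_min_io_rq, the request-based variant; check your multipath-tools version):

```
# /etc/iscsi/iscsid.conf -- open two TCP sessions per target
node.session.nr_sessions = 2

# /etc/multipath.conf -- send as few requests as possible down a path
# before switching (1 suits all-flash arrays; defaults are fine for homelabs)
defaults {
    rr_min_io_rq 1
}
```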
++++++++++++++++++++++++++++++++++++++++++++++++++++++
----- Original Message ----- From: "Michal Skrivanek" <michal.skrivanek@redhat.com> To: "Karli Sjöberg" <Karli.Sjoberg@slu.se>, "Demeter Tibor" <tdemeter@itsmart.hu> Cc: "users" <users@ovirt.org> Sent: Tuesday, September 8, 2015 10:18:54 AM Subject: Re: [ovirt-users] strange iscsi issue
On 8 Sep 2015, at 07:45, Karli Sjöberg wrote:
Tue 2015-09-08 at 06:59 +0200, Demeter Tibor wrote:
Hi,
Thank you for your reply. I'm sorry, but I don't think so. This storage is fast because it is SSD-based, and I can read/write to it with good performance. I know that in a virtual environment I/O is always slower than on physical, but here I have a very large difference. Also, I use an ext4 FS.

My suggestion would be to use a filesystem benchmarking tool like bonnie++ to first test the performance locally on the storage server and then redo the same test inside a virtual machine. Also make sure the VM is using a VirtIO disk (either block or SCSI) for best performance.

Also note the new 3.6 support for virtio-blk dataplane [1]. Not sure how it will look using artificial stress tools, but in general it improves storage performance a lot.
Thanks, michal
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1214311
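One quick way to run the local-vs-VM comparison suggested above, as a rough stand-in for bonnie++ (the file path is an example; conv=fdatasync forces the data to disk before dd reports the rate, so the host page cache doesn't inflate the number):

```shell
# Write 256 MiB and flush it to disk before dd reports the throughput.
# Run once on the storage server and once inside the VM, then compare.
TESTFILE=${TESTFILE:-/tmp/iotest.bin}   # point this at the filesystem under test
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync
rm -f "$TESTFILE"
```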
I have tested speeds of over 1 Gb/s with bonded 1Gb NICs, so I know it should work in theory as well as in practice.
Oh, and for the record: IO doesn't have to be bound by the speed of the storage if the host caches in RAM before sending it over the wire. But that is, in my opinion, dangerous, and as far as I know it's not activated in oVirt; please correct me if I'm wrong.
/K
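For reference on the caching question: qemu's host-cache behavior is a per-disk setting, and oVirt is generally understood to run virtual disks with the host page cache bypassed. In libvirt domain XML that corresponds roughly to a driver line like the following (a sketch; the surrounding disk element is abbreviated and attribute values are illustrative):

```
<disk type='block' device='disk'>
  <!-- cache='none' opens the device O_DIRECT, so writes are not staged in host RAM -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  ...
</disk>
```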
Thanks
Tibor
----- On September 8, 2015, at 0:40, Alex McWhirter <alexmcwhirter@triadic.us> wrote:
Unless you're using a caching filesystem like ZFS, you're going to be limited by how fast your storage back end can actually write to disk. Unless you have quite a large storage back end, 10GbE is probably faster than your disks can read and write.
On Sep 7, 2015 4:26 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi All,
I have to create a test environment, because we need to test our new 10GbE infrastructure. One server that has a 10GbE NIC - this is the VDSM host and oVirt portal. One server that has a 10GbE NIC - this is the storage.
They are connected to each other through a D-Link 10GbE switch.
Everything is good and nice: the server can connect to the storage, and I can make and run VMs, but the storage performance from inside a VM seems to be only 1 Gb/sec. I tried the iperf command for testing the connection between the servers, and it was 9.40 Gbit/sec. I tried hdparm -tT /dev/mapper/iscsidevice and it was 400-450 MB/sec. I got the same result on the storage server.
So:
- hdparm test on local storage: ~400 MB/sec
- hdparm test on the oVirt node server through the attached iSCSI device: ~400 MB/sec
- hdparm test from inside a VM on a local virtual disk: 93-102 MB/sec
The question is: why?
p.s. I have only one ovirtmgmt device, so there are no other networks. The router is only 1 Gb/sec, but I've tested it and the traffic does not go through it.
Thanks in advance,
Regards, Tibor
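As a units sanity check on the numbers above (an editorial aside): converting nominal link speeds to decimal MB/s shows why the in-VM figure looks exactly "1Gb-shaped" while the disks themselves can do more:

```shell
# Convert a nominal link speed in Gbit/s to decimal MB/s: speed * 1000 / 8
gbps_to_mbs() {
    echo $(( $1 * 1000 / 8 ))
}
echo "1 GbE  ~ $(gbps_to_mbs 1) MB/s"    # 125 MB/s -- close to the 93-102 MB/s seen inside the VM
echo "10 GbE ~ $(gbps_to_mbs 10) MB/s"   # 1250 MB/s -- far above the ~400 MB/s the disks deliver
```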
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users