
Hi All,

I have to create a test environment for testing purposes, because we need to test our new 10GbE infrastructure.
One server with a 10GbE NIC - this is the VDSM host and oVirt portal.
One server with a 10GbE NIC - this is the storage.

They are connected to each other through a D-Link 10GbE switch.

Everything is good and nice: the server can connect to the storage and I can create and run VMs, but the storage performance from inside a VM seems to be only 1 Gb/s. I tried iperf to test the connection between the servers, and it reported 9.40 Gbit/s. I also tried hdparm -tT /dev/mapper/iscsidevice and it showed 400-450 MB/s; I got the same result on the storage server.

So:

- hdparm test on local storage: ~400 MB/s
- hdparm test on the oVirt node through the attached iSCSI device: ~400 MB/s
- hdparm test from inside a VM on its local virtual disk: 93-102 MB/s

The question is: why?

PS. I have only one ovirtmgmt device, so there are no other networks. The router is only 1 Gb/s, but I have tested it and the traffic does not go through it.

Thanks in advance,

Regards,
Tibor
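For reference, converting the figures above into the same units (1 byte = 8 bits) makes the comparison clearer:

    10 GbE link:             10 Gbit/s / 8   ~ 1.25 GB/s theoretical maximum
    iperf result:            9.40 Gbit/s / 8 ~ 1.18 GB/s usable network bandwidth
    hdparm on host/storage:  400-450 MB/s    ~ 3.2-3.6 Gbit/s
    hdparm inside the VM:    93-102 MB/s     ~ 0.75-0.8 Gbit/s

So the in-VM figure is roughly what a 1 Gbit/s link would deliver, even though the physical path is 10 GbE.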

Unless you're using a caching filesystem like ZFS, you're going to be limited by how fast your storage back end can actually write to disk. Unless you have quite a large storage back end, 10GbE is probably faster than your disks can read and write.

On Sep 7, 2015 4:26 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:

Hi,

Thank you for your reply. I'm sorry, but I don't think that's it. This storage is fast, because it is SSD based and I can read from and write to it with good performance. I know that in a virtual environment I/O is always slower than on physical hardware, but here the difference is very large. Also, I use an ext4 filesystem.

Thanks,
Tibor

----- On Sep 8, 2015, at 0:40, Alex McWhirter <alexmcwhirter@triadic.us> wrote:

On Tue, 2015-09-08 at 06:59 +0200, Demeter Tibor wrote:
Hi, thank you for your reply. I'm sorry, but I don't think that's it. This storage is fast, because it is SSD based and I can read from and write to it with good performance. I know that in a virtual environment I/O is always slower than on physical hardware, but here the difference is very large. Also, I use an ext4 filesystem.
My suggestion would be to use a filesystem benchmarking tool like bonnie++ to first test the performance locally on the storage server and then redo the same test inside a virtual machine; a sketch of such a run is included below. Also make sure the VM is using a VirtIO disk (either block or SCSI) for best performance. I have tested speeds over 1 Gb/s with bonded 1 Gb NICs, so I know it should work in theory as well as in practice.

Oh, and for the record: I/O doesn't have to be bound by the speed of the storage if the host caches in RAM before sending it over the wire. But that is, in my opinion, dangerous, and as far as I know it's not activated in oVirt - please correct me if I'm wrong.

/K
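For example, something along these lines on both the storage server and inside the VM (the directory and size are only placeholders; use a size of at least twice the machine's RAM so the page cache doesn't skew the result):

    # sequential read/write test, skipping the small-file phase (-n 0)
    bonnie++ -d /mnt/testdir -s 16g -n 0 -u root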

Are we talking about a single SSD or an array of them? VM disks are usually large continuous image files, and SSDs are faster at delivering many small files than one large continuous file.

I believe oVirt forces sync writes by default, but I'm not sure, as I'm using NFS. The best thing to do is figure out whether it's a storage issue or a network issue.

Try setting your iSCSI server to use async writes. This can be dangerous if either server crashes or loses power, so I would only do it for testing purposes. With async writes you should be able to hit close to 10 Gbit/s on writes, but reads will depend on how much data is cached and how much RAM the iSCSI server has.

Are you presenting a raw disk over iSCSI, an image file, or a filesystem LUN via ZFS or something similar?

Alex sent the message, but his phone sent the typos...

On Sep 8, 2015 1:45 AM, Karli Sjöberg wrote:
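I don't know which target software you are running; if it happens to be an LIO target managed with targetcli, write-back caching (the closest thing to async writes there) could be tried roughly like this, with placeholder names - again, only for testing:

    # WARNING: cached data is lost if the storage server crashes or loses power
    # create a fileio backstore with write-back caching enabled (name, path and size are placeholders)
    targetcli /backstores/fileio create name=testimg file_or_dev=/srv/test.img size=50G write_back=true
    # confirm the backstore and its cache mode
    targetcli ls /backstores/fileio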

On 08/09/15 09:05, Alex McWhirter wrote:
Are we talking about a single SSD or an array of them? VM disks are usually large continuous image files, and SSDs are faster at delivering many small files than one large continuous file.

I believe oVirt forces sync writes by default, but I'm not sure, as I'm using NFS. The best thing to do is figure out whether it's a storage issue or a network issue.

Try setting your iSCSI server to use async writes. This can be dangerous if either server crashes or loses power, so I would only do it for testing purposes.
I do not recommend it, as it would not reflect real life usage later on.
With async writes you should be able to hit close to 10 Gbit/s on writes, but reads will depend on how much data is cached and how much RAM the iSCSI server has.

Are you presenting a raw disk over iSCSI, an image file, or a filesystem LUN via ZFS or something similar?
Some tips for iSCSI and general I/O performance:
- The VM should be using VirtIO for both disk and NIC.
- I recommend XFS over ext4, but both are generally OK. If possible, however, I'd test with a raw block device first.
- Ensure you have enough paths to the storage and/or multiple iSCSI sessions. You may wish to configure the iSCSI target portal with multiple IP addresses (not only for redundancy, but for multiple connections).
- I highly recommend 'fio' as an I/O tool over bonnie++ or hdparm; a sample run is sketched below.

Is the VM CPU the bottleneck, perhaps?
Y.
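For example, a sequential read test like the following, run first on the host against the iSCSI device and then inside the guest against its disk, would make the comparison with the hdparm numbers straightforward (the device path, size and runtime are placeholders):

    # sequential 1M reads with direct I/O, bypassing the page cache
    fio --name=seqread --filename=/dev/vdb --rw=read --bs=1M --iodepth=16 \
        --ioengine=libaio --direct=1 --size=4G --runtime=60 --group_reporting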

On 8 Sep 2015, at 07:45, Karli Sjöberg wrote:
My suggestion would be to use a filesystem benchmarking tool like bonnie++ to first test the performance locally on the storage server and then redo the same test inside a virtual machine. Also make sure the VM is using a VirtIO disk (either block or SCSI) for best performance. I have
Also note the new 3.6 support for virtio-blk dataplane [1]. I'm not sure how it will look with artificial stress tools, but in general it improves storage performance a lot.

Thanks,
michal

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1214311
tested speeds over 1 Gb/s with bonded 1 Gb NICs, so I know it should work in theory as well as in practice.

I have my homelab connected via 10Gb Direct Attached Cables (DAC), using X520 cards and Cisco 2m cables. I did some tuning on the servers and the storage (HPC background :) ). Here is a short copy-paste from my personal install doc.

You'll have to trust me on the full HW config and speeds, but I can achieve between 700 and 950 MB/s for 4 GB files. Again, this is for my homelab - power over performance - with 115 W average power usage for the whole stack.

++++++++++++++++++++++++++++++++++++++++++++++++++++++
*All nodes*
install CentOS
Put eth in correct order
MTU=9000
reboot

/etc/sysctl.conf
net.core.rmem_max=16777216
net.core.wmem_max=16777216
# increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
# increase the length of the processor input queue
net.core.netdev_max_backlog=30000

*removed detailed personal info*

*below is storage only*
/etc/fstab
ext4 defaults,barrier=0,noatime,nodiratime
/etc/sysconfig/nfs
RPCNFSDCOUNT=16
++++++++++++++++++++++++++++++++++++++++++++++++++++++
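For completeness, this is roughly how those settings are applied on CentOS (the interface name, device and mount point below are placeholders, not my actual values):

    # reload /etc/sysctl.conf without a reboot
    sysctl -p

    # /etc/sysconfig/network-scripts/ifcfg-eth0 - jumbo frames on the 10GbE interface
    # (the switch ports must also be configured for jumbo frames)
    MTU=9000

    # example /etc/fstab line for the storage filesystem with the options above
    /dev/sdb1  /srv/storage  ext4  defaults,barrier=0,noatime,nodiratime  0 2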

On 10/09/15 01:16, Raymond wrote:
I have my homelab connected via 10Gb Direct Attached Cables (DAC), using X520 cards and Cisco 2m cables.

I did some tuning on the servers and the storage (HPC background :) ). Here is a short copy-paste from my personal install doc.

You'll have to trust me on the full HW config and speeds, but I can achieve between 700 and 950 MB/s for 4 GB files. Again, this is for my homelab - power over performance - with 115 W average power usage for the whole stack.
++++++++++++++++++++++++++++++++++++++++++++++++++++++
*All nodes*
install CentOS
Put eth in correct order
MTU=9000
reboot
/etc/sysctl.conf
net.core.rmem_max=16777216
net.core.wmem_max=16777216
# increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_wmem=4096 65536 16777216
# increase the length of the processor input queue
net.core.netdev_max_backlog=30000
*removed detailed personal info*
*below is storage only*
/etc/fstab
ext4 defaults,barrier=0,noatime,nodiratime
/etc/sysconfig/nfs
RPCNFSDCOUNT=16
++++++++++++++++++++++++++++++++++++++++++++++++++++++
All looks quite good.
Do you have multipathing for iSCSI? I highly recommend it, and then reduce the number of requests (via multipath.conf) as low as possible (against a high-end all-flash array, 1 is good too! I reckon for homelabs the default is OK as well).

Regardless, I also recommend increasing the number of TCP sessions - assuming your storage is not a bottleneck, you should be able to get to ~1100 MB/s. node.session.nr_sessions in iscsid.conf should be set to 2, for example; see the sketch below.
Y.
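A minimal sketch of both changes, assuming iscsi-initiator-utils and device-mapper-multipath on the host; the "number of requests" knob is not named above, and rr_min_io_rq is only one likely candidate (the target IQN and portal are placeholders):

    # /etc/iscsi/iscsid.conf - open two sessions per discovered target
    node.session.nr_sessions = 2

    # /etc/multipath.conf - switch paths after fewer requests (defaults section)
    defaults {
        rr_min_io_rq 1
    }

    # log out and back in so the extra sessions are created
    iscsiadm -m node -T iqn.2015-09.example:storage -p 192.168.0.10 --logout
    iscsiadm -m node -T iqn.2015-09.example:storage -p 192.168.0.10 --login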
participants (6)

- Alex McWhirter
- Demeter Tibor
- Karli Sjöberg
- Michal Skrivanek
- Raymond
- Yaniv Kaul