Poor iSCSI Performance


Hi, have you already captured some iSCSI traffic in both situations and compared the two? An MTU mismatch, maybe? Juergen

On 18.02.2015 at 02:08, John Florian wrote:
Hi all,
I've been trying to resolve a storage performance issue but have had no luck in identifying the exact cause. I have my storage domain on iSCSI and I can get the expected performance (limited by the Gbit Ethernet) when running bonnie++ on:
* a regular physical machine configured with the iSCSI initiator connected to a dedicated iSCSI test target -- thus oVirt and VM technology are completely out of the picture
* my oVirt host with the initiator connected to that same dedicated target -- thus I have an iSCSI connection on the oVirt host but I'm not using the iSCSI connection provided by oVirt's storage domain
* a VM (hosted by oVirt) with the initiator (inside the VM) connected to that target -- thus bypassing oVirt's storage domain and the virtual disk it provides this VM
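In each case above, the initiator end was brought up with iscsi-initiator-utils, roughly as follows (a minimal sketch; the portal address and target IQN here are placeholders, not the actual values):

# iscsiadm -m discovery -t sendtargets -p 192.168.1.10
# iscsiadm -m node -T iqn.2015-02.example.com:testtarget -p 192.168.1.10 --login

after which the new block device was formatted and mounted as the bonnie++ test directory.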
However, if I just use a regular virtual disk via oVirt's storage domain, the performance is much worse. I've tried both VirtIO and VirtIO-SCSI and found no appreciable difference.
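One thing worth confirming on the host is what disk definition the VM actually ends up with; a quick check (the VM name is a placeholder):

# virsh dumpxml narvi-f21 | grep -A 4 '<disk'

On an oVirt host this should show a <driver> line along the lines of name='qemu' type='raw' cache='none' io='native' for a block-backed virtio disk; a different cache or io mode there would be a lead.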
Here's a typical example of the poor performance I get (as tested with bonnie++) with the normal virtual disk setup:
# bonnie++ -d . -r 2048 -u root:root
<snip>
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
narvi-f21.double 4G   806  91 18507   1 15675   1  3174  56 33175   1 176.4   3
Latency             15533us    8142ms    2440ms     262ms    1289ms     780ms
Version  1.96       ------Sequential Create------ --------Random Create--------
narvi-f21.doubledog -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 13641  24 +++++ +++ 22702  17 18919  31 +++++ +++ +++++ +++
Latency             27724us     247us     292us      71us      30us     172us
For comparison, here's what I see if I run the same test on the same VM and the same host, but this time the file system is mounted from a device obtained using iscsi-initiator-utils inside the VM, i.e., the third bullet configuration above:
bonnie++ -d . -r 2048 -u root:root
<snip>
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
narvi-f21.double 4G  2051  89 103877   4 36286   3  4803  88 88166   4 163.6   3
Latency              7724us     191ms     396ms   48734us   73004us    1645ms
Version  1.96       ------Sequential Create------ --------Random Create--------
narvi-f21.doubledog -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6531  18 +++++ +++ 16388  20  5924  15 +++++ +++ 17906  23
Latency             15623us      64us      92us    1281us      14us     256us
Side by side, that's sequential block writes of roughly 18 MB/s through the storage domain versus roughly 101 MB/s through the in-VM initiator, and block reads of roughly 32 MB/s versus 86 MB/s, on the same VM and host. My host is Fedora 20 running oVirt 3.5 (hosted-engine); the VM is running Fedora Server 21. Tonight I tried updating the host from the Fedora virt-preview repository and saw no significant change in performance. Where should I look next?
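A minimal version of the capture-and-compare that Juergen suggests, run on the host (the interface name and portal address are placeholders):

# ip link show em1 | grep -o 'mtu [0-9]*'
# ping -M do -s 8972 -c 3 192.168.1.10
# tcpdump -i em1 -w storage-domain.pcap tcp port 3260

The ping succeeds only if jumbo frames (MTU 9000) are in effect end to end; with a standard 1500-byte MTU, use -s 1472 instead. Capturing once while the VM writes through its storage-domain disk and once during an in-VM initiator run, then comparing the two files in wireshark, should reveal whether the two paths differ in segment size or show retransmissions.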
participants (2)
- InterNetX - Juergen Gotteswinter
- John Florian