[ovirt-users] strange iscsi issue

Yaniv Kaul ykaul at redhat.com
Wed Sep 9 20:50:09 UTC 2015


On 10/09/15 01:16, Raymond wrote:
> I've my homelab connected via 10Gb Direct Attached Cables (DAC)
> Use x520 cards and Cisco 2m cables.
>
> Did some tuning on servers and storage (HPC background :) )
> Here is a short copy paste from my personal install doc.
>
> You'll have to trust me on the whole HW config and speeds, but I can achieve between 700 and 950 MB/s for 4GB files.
> Again this is for my homelab, power over performance, 115w average power usage for the whole stack.
>
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> *All nodes*
> install CentOS
>
> Put eth in correct order
>
> MTU=9000
>
> reboot
>
> /etc/sysctl.conf
>   net.core.rmem_max=16777216
>   net.core.wmem_max=16777216
>   # increase Linux autotuning TCP buffer limit
>   net.ipv4.tcp_rmem=4096 87380 16777216
>   net.ipv4.tcp_wmem=4096 65536 16777216
>   # increase the length of the processor input queue
>   net.core.netdev_max_backlog=30000
>
> *removed detailed personal info*
>
> *below is storage only*
> /etc/fstab
>   ext4    defaults,barrier=0,noatime,nodiratime
> /etc/sysconfig/nfs
>   RPCNFSDCOUNT=16

All looks quite good.
Do you have multipathing for iSCSI? I highly recommend it, and then
reduce the number of requests per path (via multipath.conf) as low as
possible. Against a high-end all-flash array, 1 is good; I reckon against
a homelab the default is OK too.
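
A minimal sketch of that knob in /etc/multipath.conf (the values here are
an assumption for illustration, not a tested config; on BIO-based setups
the option is rr_min_io instead):

  defaults {
      # group all paths and round-robin across them
      path_grouping_policy  multibus
      # requests sent down one path before switching to the next;
      # lower spreads I/O across the paths more aggressively
      rr_min_io_rq          1
  }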

Regardless, I also recommend increasing the number of TCP sessions -
assuming your storage is not a bottleneck, you should be able to get to
~1100MB/sec. node.session.nr_sessions in iscsid.conf should be set to 2,
for example.
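Roughly, and assuming the stock open-iscsi tooling on CentOS, that looks
like:

  # /etc/iscsi/iscsid.conf (applies to newly discovered targets)
  node.session.nr_sessions = 2

  # for already-discovered targets, update the node records, then re-login
  iscsiadm -m node -o update -n node.session.nr_sessions -v 2
  iscsiadm -m node -u
  iscsiadm -m node -l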
Y.

> ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
> ----- Original Message -----
> From: "Michal Skrivanek" <michal.skrivanek at redhat.com>
> To: "Karli Sjöberg" <Karli.Sjoberg at slu.se>, "Demeter Tibor" <tdemeter at itsmart.hu>
> Cc: "users" <users at ovirt.org>
> Sent: Tuesday, September 8, 2015 10:18:54 AM
> Subject: Re: [ovirt-users] strange iscsi issue
>
> On 8 Sep 2015, at 07:45, Karli Sjöberg wrote:
>
>> On Tue, 2015-09-08 at 06:59 +0200, Demeter Tibor wrote:
>>> Hi,
>>> Thank you for your reply.
>>> I'm sorry, but I don't think so. This storage is fast because it is SSD based, and I can read from and write to it with good performance.
>>> I know that in a virtual environment I/O is always slower than on physical hardware, but here I see a very large difference.
>>> Also, I use ext4 FS.
>> My suggestion would be to use a filesystem benchmarking tool like
>> bonnie++ to first test the performance locally on the storage server and
>> then redo the same test inside a virtual machine (a minimal example is
>> sketched below). Also make sure the VM is using a VirtIO disk (either
>> block or SCSI) for best performance. I have
> also note the new 3.6 support for virtio-blk dataplane [1]. Not sure how it will look with artificial stress tools, but in general it improves storage performance a lot.
>
> Thanks,
> michal
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1214311
>
>> tested speeds over 1Gb/s with bonded 1Gb NICs, so I know it should work
>> in theory as well as in practice.
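>>
>> A minimal sketch of that comparison (assuming bonnie++ is installed and
>> /mnt/test sits on the storage being measured; keep the size well above
>> the amount of RAM so the page cache does not skew the result):
>>
>>   # run once locally on the storage server, then once inside the VM
>>   bonnie++ -d /mnt/test -s 8192 -n 0 -u root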
>>
>> Oh, and for the record: IO doesn't have to be bound by the speed of the
>> storage if the host caches in RAM before sending it over the wire. But
>> that is, in my opinion, dangerous, and as far as I know it's not activated
>> in oVirt; please correct me if I'm wrong.
>>
>> /K
>>
>>> Thanks
>>>
>>> Tibor
>>>
>>>
>>> ----- On Sep 8, 2015, at 0:40, Alex McWhirter <alexmcwhirter at triadic.us> wrote:
>>>
>>>> Unless you're using a caching filesystem like ZFS, you're going to be
>>>> limited by how fast your storage back end can actually write to disk. Unless
>>>> you have a quite large storage back end, 10GbE is probably faster than your
>>>> disks can read and write.
>>>>
>>>> On Sep 7, 2015 4:26 PM, Demeter Tibor <tdemeter at itsmart.hu> wrote:
>>>>> Hi All,
>>>>>
>>>>> I have to create a test environment, because we need to
>>>>> test our new 10GbE infrastructure.
>>>>> One server has a 10GbE NIC - this is the vdsm host and oVirt portal.
>>>>> One server has a 10GbE NIC - this is the storage.
>>>>>
>>>>> They are connected to each other through a D-Link 10GbE switch.
>>>>>
>>>>> Everything is good and nice: the server can connect to the storage, and I can create and run
>>>>> VMs, but the storage performance from inside a VM seems to be only 1Gb/sec.
>>>>> I tried iperf to test the connection between the servers, and it was
>>>>> 9.40 Gbit/sec. I also tried hdparm -tT /dev/mapper/iscsidevice, and it
>>>>> was 400-450 MB/sec. I got the same result on the storage server.
>>>>>
>>>>> So:
>>>>>
>>>>> - hdparm test on local storage: ~400 MB/sec
>>>>> - hdparm test on the oVirt node server through the attached iSCSI device: ~400 MB/sec
>>>>> - hdparm test from inside the VM on a local virtual disk: 93-102 MB/sec
>>>>>
>>>>> The question is: why?
>>>>>
>>>>> PS: I have only one ovirtmgmt device, so there are no other networks. The router
>>>>> is only 1Gb/sec, but I've tested it and the traffic does not go through it.
>>>>>
>>>>> Thanks in advance,
>>>>>
>>>>> Regards,
>>>>> Tibor
