[ovirt-users] strange iscsi issue

Michal Skrivanek michal.skrivanek at redhat.com
Tue Sep 8 08:18:54 UTC 2015


On 8 Sep 2015, at 07:45, Karli Sjöberg wrote:

> On Tue, 2015-09-08 at 06:59 +0200, Demeter Tibor wrote:
>> Hi,
>> Thank you for your reply.
>> I'm sorry, but I don't think so. This storage is fast because it is SSD-based, and I can read from and write to it with good performance.
>> I know that in a virtual environment I/O is always slower than on physical hardware, but here I see a very large difference.
>> Also, I use an ext4 filesystem.
> 
> My suggestion would be to use a filesystem benchmarking tool like
> bonnie++ to first test the performance locally on the storage server and
> then redo the same test inside a virtual machine. Also make sure the VM
> is using a VirtIO disk (either block or SCSI) for best performance. I have

Also note the new 3.6 support for virtio-blk dataplane [1]. I'm not sure how it will look with artificial stress tools, but in general it improves storage performance a lot.
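Under the hood, dataplane means the virtio-blk disk is serviced by a dedicated QEMU iothread instead of the main event loop. In libvirt domain XML this roughly corresponds to the fragment below (a sketch for illustration only, not what oVirt emits verbatim):

```xml
<domain type='kvm'>
  <!-- allocate one dedicated I/O thread for the dataplane -->
  <iothreads>1</iothreads>
  <devices>
    <disk type='block' device='disk'>
      <!-- pin this virtio-blk disk to iothread 1 -->
      <driver name='qemu' type='raw' iothread='1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```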

Thanks,
michal

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1214311

> tested speeds over 1Gb/s with bonded 1Gb NICs, so I know it should work
> in theory as well as in practice.
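For a quick first number before reaching for bonnie++, a dd run that syncs before reporting gives a rough sequential-write figure the host page cache cannot inflate (a minimal sketch; the file path is a placeholder and should point at the filesystem under test):

```shell
# Write 64 MiB and force it to stable storage before dd reports a rate,
# so caching in RAM does not inflate the number.
TESTFILE=/tmp/seqwrite.bin   # placeholder; use a path on the storage under test
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
```

Running the same command locally on the storage server and again inside the VM makes the comparison direct: a large gap between the two points at the iSCSI or virtualization layer rather than the disks.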
> 
> Oh, and for the record: I/O doesn't have to be bound by the speed of the
> storage if the host caches in RAM before sending it over the wire. But
> that, in my opinion, is dangerous, and as far as I know it's not
> activated in oVirt; please correct me if I'm wrong.
> 
> /K
> 
>> 
>> Thanks
>> 
>> Tibor
>> 
>> 
>> ----- On Sep 8, 2015, at 0:40, Alex McWhirter <alexmcwhirter at triadic.us> wrote:
>> 
>>> Unless you're using a caching filesystem like ZFS, you're going to be
>>> limited by how fast your storage back end can actually write to disk.
>>> Unless you have quite a large storage back end, 10GbE is probably faster
>>> than your disks can read and write.
>>> 
>>> On Sep 7, 2015 4:26 PM, Demeter Tibor <tdemeter at itsmart.hu> wrote:
>>>> 
>>>> Hi All,
>>>> 
>>>> I have set up a test environment because we need to test our new
>>>> 10GbE infrastructure.
>>>> One server with a 10GbE NIC - this is the VDSM host and oVirt portal.
>>>> One server with a 10GbE NIC - this is the storage.
>>>> 
>>>> They are connected to each other through a D-Link 10GbE switch.
>>>> 
>>>> Everything is fine: the server can connect to the storage, and I can
>>>> create and run VMs, but the storage performance from inside a VM seems
>>>> to be only 1Gb/s.
>>>> I tried iperf to test the connection between the servers, and it showed
>>>> 9.40 Gbit/s. I also tried hdparm -tT /dev/mapper/iscsidevice, and it
>>>> showed 400-450 MB/s. I got the same result on the storage server.
>>>> 
>>>> So:
>>>> 
>>>> - hdparm test on the local storage: ~400 MB/s
>>>> - hdparm test on the oVirt node through the attached iSCSI device: ~400 MB/s
>>>> - hdparm test from inside a VM on a local virtual disk: 93-102 MB/s
>>>> 
>>>> The question is: why?
>>>> 
>>>> P.S. I have only one ovirtmgmt device, so there are no other networks.
>>>> The router is only 1GbE, but I have tested this and the traffic does
>>>> not go through it.
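One way to double-check which path the iSCSI traffic takes is to ask the kernel directly for its routing decision (a sketch; STORAGE_IP is a placeholder for the storage server's address):

```shell
# Print the route (device and source address) the kernel would use to
# reach the storage server; the output should name the 10GbE interface.
STORAGE_IP=127.0.0.1   # placeholder; substitute the iSCSI target's address
ip route get "$STORAGE_IP"
```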
>>>> 
>>>> Thanks in advance,
>>>> 
>>>> Regards,
>>>> Tibor
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 



