[ovirt-users] strange iscsi issue

Alex McWhirter alexmcwhirter at triadic.us
Tue Sep 8 06:05:02 UTC 2015


Are we talking about a single SSD or an array of them? VM disks are usually large, contiguous image files, and SSDs are faster at delivering many small files (random I/O) than at large sequential transfers like these.
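If you want to see that difference on your storage, something like fio makes it obvious. This is just a sketch - it assumes fio is installed and that /mnt/test sits on the SSD storage:

    # large sequential reads, 1M blocks - roughly what streaming a VM image looks like
    fio --name=seq --rw=read --bs=1M --size=4G --directory=/mnt/test --direct=1

    # small random reads, 4k blocks - the workload where SSDs really pull ahead
    fio --name=rand --rw=randread --bs=4k --size=4G --directory=/mnt/test --direct=1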

I believe oVirt forces sync writes by default, but I'm not sure since I'm using NFS. The best thing to do is figure out whether it's a storage issue or a network issue.
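Roughly, something like this separates the two (hostnames and paths are just examples):

    # network path: iperf alone, no disks involved
    iperf -s                    # on the storage server
    iperf -c storage-server     # on the ovirt host; ~9.4 Gbits/sec means the wire is fine

    # storage path: raw write speed on the storage box itself, bypassing the page cache
    dd if=/dev/zero of=/path/on/storage/testfile bs=1M count=4096 oflag=direct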

Try setting your iSCSI server to use async writes. This can be dangerous if either server crashes or loses power, so I would only do it for testing purposes.
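How you do that depends on the target software. If the LUN happens to be a ZFS zvol, for example, it's a single property (the dataset name here is made up):

    # testing only - a power loss can eat in-flight writes
    zfs set sync=disabled tank/iscsivol

    # ... run your benchmarks ...

    zfs set sync=standard tank/iscsivol   # restore the default when done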

With async writes you should be able to hit near 10 Gb/s on writes, but reads will depend on how much data is cached and how much RAM the iSCSI server has.
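A quick way to see the difference from the initiator side - the paths are placeholders, point them at a scratch file on the iSCSI-backed filesystem, not at the raw device:

    # forces a flush at the end, so this shows what the disks can really sustain
    dd if=/dev/zero of=/mnt/scratch/testfile bs=1M count=4096 conv=fdatasync

    # no flush - with async writes on the target this can approach wire speed
    dd if=/dev/zero of=/mnt/scratch/testfile bs=1M count=4096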

Are you presenting a raw disk over iSCSI, an image file, or a filesystem-backed LUN via ZFS or something similar?
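If you're not sure, the target itself will tell you. Assuming an LIO-based target:

    # lists the backstores (block, fileio, etc.) behind each LUN
    targetcli ls /backstores

    # and if ZFS is underneath, zvols show up here
    zfs list -t volume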

Alex sent the message, but his phone sent the typos...

On Sep 8, 2015 1:45 AM, Karli Sjöberg wrote:
>
> On Tue 2015-09-08 at 06:59 +0200, Demeter Tibor wrote:
> > Hi,
> >
> > Thank you for your reply.
> >
> > I'm sorry, but I don't think so. This storage is fast because it is SSD-based, and I can read/write to it with good performance.
> >
> > I know that in a virtual environment the I/O is always slower than on physical hardware, but here I have a very large difference.
> >
> > Also, I use ext4 FS.
>
> My suggestion would be to use a filesystem benchmarking tool like bonnie++ to first test the performance locally on the storage server and then redo the same test inside of a virtual machine. Also make sure the VM is using a VirtIO disk (either block or SCSI) for best performance. I have tested speeds over 1Gb/s with bonded 1Gb NICs, so I know it should work in theory as well as in practice.
>
> Oh, and for the record: IO doesn't have to be bound by the speed of storage if the host caches in RAM before sending it over the wire. But that in my opinion is dangerous, and as far as I know it's not activated in oVirt; please correct me if I'm wrong.
>
> /K
>
> > Thanks
> >
> > Tibor
> >
> > On Sep 8, 2015 at 0:40, Alex McWhirter alexmcwhirter at triadic.us wrote:
> >
> > > Unless you're using a caching filesystem like ZFS, you're going to be limited by how fast your storage back end can actually write to disk. Unless you have quite a large storage back end, 10GbE is probably faster than your disks can read and write.
> > >
> > > On Sep 7, 2015 4:26 PM, Demeter Tibor wrote:
> > >>
> > >> Hi All,
> > >>
> > >> I have to create a test environment because we need to test our new 10GbE infrastructure.
> > >> One server has a 10GbE NIC - this is the vdsm host and oVirt portal.
> > >> One server has a 10GbE NIC - this is the storage.
> > >>
> > >> They are connected to each other through a D-Link 10GbE switch.
> > >>
> > >> Everything is good and nice: the server can connect to the storage, and I can make and run VMs, but the storage performance from inside a VM seems to be only 1Gb/sec.
> > >> I tried the iperf command for testing the connection between the servers, and it was 9.40 Gb/sec. I also tried hdparm -tT /dev/mapper/iscsidevice and it was 400-450 MB/sec. I got the same result on the storage server.
> > >>
> > >> So:
> > >>
> > >> - hdparm test on local storage ~ 400 MB/sec
> > >> - hdparm test on the ovirt node server through the attached iscsi device ~ 400 MB/sec
> > >> - hdparm test from inside a VM on a local virtual disk ~ 93-102 MB/sec
> > >>
> > >> The question is: why?
> > >>
> > >> ps. I have only one ovirtmgmt device, so there are no other networks. The router is only 1Gb/sec, but I've tested and the traffic does not go through it.
> > >>
> > >> Thanks in advance,
> > >>
> > >> Regards,
> > >> Tibor
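For what it's worth, a minimal bonnie++ run along the lines Karli suggests - the directory and size are examples; pick a size of at least twice the machine's RAM so the cache can't flatter the numbers:

    # run on the storage server first, then repeat inside the VM and compare
    bonnie++ -d /mnt/test -s 32G -n 0 -u root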

