[ovirt-users] Ovirt with bad IO performance
Gabriel Ozaki
gabriel.ozaki at kemi.com.br
Fri Sep 2 11:11:43 EDT 2016
Hi Yaniv,

Sorry guys, I didn't explain this well in my first mail: I am seeing bad I/O
performance in *disk* benchmarks; the network is working fine.
2016-09-02 12:04 GMT-03:00 Yaniv Kaul <ykaul at redhat.com>:
> On Fri, Sep 2, 2016 at 5:33 PM, Gabriel Ozaki <gabriel.ozaki at kemi.com.br> wrote:
>
>> Hi Nir, thanks for the answer
>>
>>
>> *Is the NFS server on the host?*
>> Yes, I chose NFS as the storage for the oVirt host.
>>
>> *- Is this 2.9 GiB/s or 2.9 MiB/s?*
>> It is MiB/s; I put the full test output on Pastebin.
>> CentOS guest on oVirt:
>> http://pastebin.com/d48qfvuf
>>
>> CentOS guest on XenServer:
>> http://pastebin.com/gqN3du29
>>
>> How the test works:
>> https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench
>>
>> *- Are you using NFS in all the tests?*
>> I am using NFS v3.
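>>
>> (To double-check the negotiated version and mount options on the host,
>> nfsstat -m lists every NFS mount together with its vers=, rsize= and
>> wsize= settings:)
>>
>> # nfsstat -m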
>>
>> *- What is the disk format?*
>> Partition layout (partition / size / format):
>> /        20 GB          xfs
>> swap      2 GB          xfs
>> /dados   rest of disk   xfs   (note: this is the partition where I keep
>> the ISOs, exports and VM disks)
>>
>>
>> *- How do you test I/O on the host?*
>> I did a clean install of CentOS and ran the test before installing oVirt.
>> The test output:
>> http://pastebin.com/7RKU7778
>>
>> *- What kind of NIC is used? (1G, 10G?)*
>> It is only 100 Mbps :(
>>
>
> 100 Mbps will not get you more than several MB/s; 11 MB/s on a very
> bright day...
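>
> As a back-of-the-envelope check (assuming roughly 10% combined TCP/IP and
> NFS protocol overhead, which is only a ballpark figure):
>
>     100 Mbit/s  /  8 bits per byte   =  12.5 MB/s raw wire ceiling
>     12.5 MB/s   *  ~0.9 for overhead ~= 11 MB/s realistic best case
>
> Anything above that reported by an NFS-backed guest has to be coming from
> a cache, not from the wire.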
>
>>
>> *We need many more details to understand what you are testing here.*
>> I had problems uploading the oVirt benchmark result to the Novabench
>> site, so here is a screenshot (I made a mistake in my last email and
>> quoted the wrong value); it is 86 MB/s:
>>
>
> Which is not possible on the wire. Unless it's VM to VM? And the storage
> is local, which means it's the bandwidth of the physical disk itself?
> Y.
>
>>
>> And the Novabench result on XenServer:
>> https://novabench.com/compare.php?id=ba8dd628e4042dfc1f3d39670b164ab11061671
>>
>> *- For XenServer, a detailed description of the VM and the storage
>> configuration?*
>> The host is the same (I installed XenServer and ran the tests before
>> installing CentOS). The VM uses the same configuration as on oVirt:
>> 2 cores, 4 GB of RAM and a 60 GB disk (in the default XenServer SR).
>>
>> *- For oVirt, can you share the VM command line, available in
>> /var/log/libvirt/qemu/vmname.log?*
>> 2016-09-01 12:50:28.268+0000: starting up libvirt version: 1.2.17, package: 13.el7_2.5 (CentOS BuildSystem <http://bugs.centos.org>, 2016-06-23-14:23:27, worker1.bsys.centos.org), qemu version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7.16.1)
>> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name vmcentos -S
>> -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX
>> -m size=4194304k,slots=16,maxmem=4294967296k -realtime mlock=off
>> -smp 2,maxcpus=16,sockets=16,cores=1,threads=1
>> -numa node,nodeid=0,cpus=0-1,mem=4096
>> -uuid 21872e4b-7699-4502-b1ef-2c058eff1c3c
>> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=03AA02FC-0414-05F8-D906-710700080009,uuid=21872e4b-7699-4502-b1ef-2c058eff1c3c
>> -no-user-config -nodefaults
>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-vmcentos/monitor.sock,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control
>> -rtc base=2016-09-01T09:50:28,driftfix=slew
>> -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3
>> -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
>> -drive file=/rhev/data-center/mnt/ovirt.kemi.intranet:_dados_iso/52ee9f87-9d38-48ec-8003-193262f81994/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-NetInstall-1511.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw
>> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2
>> -drive file=/rhev/data-center/00000001-0001-0001-0001-0000000002bb/4ccdd1f3-ee79-4425-b6ed-5774643003fa/images/2ecfcf18-ae84-4e73-922f-28b9cda9e6e1/800f05bf-23f7-4c9d-8c1d-b2503592875f,if=none,id=drive-virtio-disk0,format=raw,serial=2ecfcf18-ae84-4e73-922f-28b9cda9e6e1,cache=none,werror=stop,rerror=stop,aio=threads
>> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/21872e4b-7699-4502-b1ef-2c058eff1c3c.com.redhat.rhevm.vdsm,server,nowait
>> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/21872e4b-7699-4502-b1ef-2c058eff1c3c.org.qemu.guest_agent.0,server,nowait
>> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>> -device usb-tablet,id=input0 -vnc 192.168.0.189:0,password -k pt-br
>> -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
>> 2016-09-01T12:50:28.307173Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-09-01T12:50:28.307371Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
>> qemu: terminating on signal 15 from pid 1
>> 2016-09-01 19:13:47.899+0000: shutting down
>>
>>
>> Thanks
>>
>> 2016-09-02 11:05 GMT-03:00 Nir Soffer <nsoffer at redhat.com>:
>>
>>> On Fri, Sep 2, 2016 at 4:44 PM, Gabriel Ozaki <gabriel.ozaki at kemi.com.br> wrote:
>>>
>>>> Hi,
>>>> I am trying oVirt 4.0 and I am getting some strange results when
>>>> comparing it with XenServer.
>>>>
>>>> *The host machine*
>>>> Intel Core i5-4440 3.10 GHz (running at 3093 MHz)
>>>> 8 GB of RAM (1x8)
>>>> 500 GB disk (Seagate ST500DM002, 7200 rpm)
>>>> CentOS 7 (netinstall, for the most up-to-date stable packages)
>>>>
>>>> *How I am testing:*
>>>> I chose two benchmark tools: sysbench (from the EPEL repo on CentOS)
>>>> and Novabench (for the Windows guest, https://novabench.com). I then
>>>> made a clean install of XenServer and created two guests (CentOS and
>>>> Windows 7 SP1).
>>>>
>>>>
>>>> *The guest specs*
>>>> 2 cores
>>>> 4 GB of RAM
>>>> 60 GB disk (virtio, on NFS storage)
>>>>
>>>
>>> Is the NFS server on the host?
>>>
>>>
>>>> Important note: only the guest under test is running during the
>>>> benchmark, and I have installed the guest drivers.
>>>>
>>>>
>>>> *The sysbench disk test (creates 10 GB of data and runs the benchmark):*
>>>> # sysbench --test=fileio --file-total-size=10G prepare
>>>> # sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw
>>>> --init-rng=on --max-time=300 --max-requests=0 run
>>>>
>>>> Host result: 2.9843 Mb/sec
>>>> oVirt result: 1.1561 Mb/sec
>>>> XenServer result: 2.9006 Mb/sec
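>>>>
>>>> (Side note: with a 10 GB test file on a host that has 8 GB of RAM, part
>>>> of the random I/O can still be served from the page cache. If the
>>>> sysbench build supports --file-extra-flags, a direct-I/O variant of the
>>>> run above forces O_DIRECT and measures the disk itself:)
>>>>
>>>> # sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw
>>>> --file-extra-flags=direct --init-rng=on --max-time=300 --max-requests=0 run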
>>>>
>>>
>>> - Is this 2.9 GiB/s or 2.9 MiB/s?
>>> - Are you using NFS in all the tests?
>>> - What is the disk format?
>>> - How do you test I/O on the host?
>>> - What kind of NIC is used? (1G, 10G?)
>>>
>>>
>>>>
>>>> *The Novabench test:*
>>>> oVirt result: 79 Mb/s
>>>> XenServer result: 101 Mb/s
>>>>
>>>
>>> We need many more details to understand what you are testing here.
>>>
>>> - For oVirt, can you share the VM command line, available in
>>> /var/log/libvirt/qemu/vmname.log?
>>> - For XenServer, a detailed description of the VM and the storage
>>> configuration?
>>>
>>> Nir
>>>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-sem-driver.png
Type: image/png
Size: 29812 bytes
Desc: not available
URL: <http://lists.ovirt.org/pipermail/users/attachments/20160902/ca3a29b6/attachment-0001.png>