Hi Yaniv,

Sorry guys, I didn't explain it well in my first mail: what I am seeing is bad IO performance in the disk benchmarks; the network is working fine.
2016-09-02 12:04 GMT-03:00 Yaniv Kaul <ykaul@redhat.com>:

On Fri, Sep 2, 2016 at 5:33 PM, Gabriel Ozaki <gabriel.ozaki@kemi.com.br> wrote:

> Hi Nir, thanks for the answer.
>
>> The nfs server is in the host?
>
> Yes, I chose NFS as the storage for the oVirt host.
>
>> - Is this 2.9GiB/s or 2.9 MiB/s?
>
> It is MiB/s; I put the full test output on pastebin.
>
> How the test works:
> https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench
>
> CentOS guest on XenServer:
> http://pastebin.com/gqN3du29
>
> CentOS guest on oVirt:
> http://pastebin.com/d48qfvuf
>
>> - Are you testing using NFS in all versions?
>
> I am using the v3 version.
>
>> - What is the disk format?
>
> partition   size           format
> /           20 GB          xfs
> swap        2 GB           xfs
> /dados      rest of disk   xfs
>
> (note: /dados is the partition where I keep the ISOs, exports and VM disks)
>
>> - How do you test io on the host?
>
> I did a clean install of CentOS and ran the test before installing oVirt.
>
> The test:
> http://pastebin.com/7RKU7778
>
>> - What kind of nic is used? (1G, 10G?)
>
> It is only 100 Mbps :(

100 Mbps will not get you more than several MB/s; 11 MB/s on a very bright day...

>> We need much more details to understand what you are testing here.
>
> I had trouble uploading the oVirt benchmark result to the novabench site, so here is a screenshot (I made a mistake in the last email and quoted the wrong value); it is 86 Mb/s:

Which is not possible on the wire. Unless it's VM to VM? And the storage is local, which means it's the bandwidth of the physical disk itself?
Y.
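Yaniv's ceiling is easy to sanity-check with back-of-the-envelope arithmetic. The ~10% protocol-overhead factor below is an assumption for illustration, not a measured number:

```python
# Back-of-the-envelope: what a 100 Mbps link can actually carry.
link_mbps = 100                        # NIC line rate, megabits per second
raw_mb_per_s = link_mbps / 8           # 12.5 MB/s theoretical ceiling
overhead = 0.10                        # assumed TCP/IP + NFS overhead (~10%)
usable = raw_mb_per_s * (1 - overhead)
print(f"usable throughput ~ {usable:.1f} MB/s")  # roughly the 11 MB/s figure above

# So a reported 86 MB/s cannot have crossed this wire: it has to be
# local disk bandwidth, or traffic that never left the host.
```

This is why the 86 Mb/s novabench number points at local storage rather than NFS over the network.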
> And the novabench on XenServer:
> https://novabench.com/compare.php?id=ba8dd628e4042dfc1f3d39670b164ab11061671
>
>> - For Xenserver - detailed description of the vm and the storage configuration?
>
> The host is the same (I installed XenServer and ran the tests before installing CentOS). The VM uses the same configuration as on oVirt: 2 cores, 4 GB of RAM and a 60 GB disk (in the default XenServer SR).
>
>> - For ovirt, can you share the vm command line, available in /var/log/libvirt/qemu/vmname.log?
2016-09-01 12:50:28.268+0000: starting up libvirt version: 1.2.17, package: 13.el7_2.5 (CentOS BuildSystem <http://bugs.centos.org>, 2016-06-23-14:23:27, worker1.bsys.centos.org), qemu version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7.16.1)
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name vmcentos -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m size=4194304k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=4096 -uuid 21872e4b-7699-4502-b1ef-2c058eff1c3c -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=03AA02FC-0414-05F8-D906-710700080009,uuid=21872e4b-7699-4502-b1ef-2c058eff1c3c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-vmcentos/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2016-09-01T09:50:28,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive file=/rhev/data-center/mnt/ovirt.kemi.intranet:_dados_iso/52ee9f87-9d38-48ec-8003-193262f81994/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-NetInstall-1511.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2 -drive file=/rhev/data-center/00000001-0001-0001-0001-0000000002bb/4ccdd1f3-ee79-4425-b6ed-5774643003fa/images/2ecfcf18-ae84-4e73-922f-28b9cda9e6e1/800f05bf-23f7-4c9d-8c1d-b2503592875f,if=none,id=drive-virtio-disk0,format=raw,serial=2ecfcf18-ae84-4e73-922f-28b9cda9e6e1,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/21872e4b-7699-4502-b1ef-2c058eff1c3c.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/21872e4b-7699-4502-b1ef-2c058eff1c3c.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 192.168.0.189:0,password -k pt-br -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
2016-09-01T12:50:28.307173Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15
2016-09-01T12:50:28.307371Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
qemu: terminating on signal 15 from pid 1
2016-09-01 19:13:47.899+0000: shutting down

Thanks

2016-09-02 11:05 GMT-03:00 Nir Soffer <nsoffer@redhat.com>:

On Fri, Sep 2, 2016 at 4:44 PM, Gabriel Ozaki <gabriel.ozaki@kemi.com.br> wrote:

> Hi,
> I am trying oVirt 4.0 and I am getting some strange results when comparing it with XenServer.
>
> *The host machine
> Intel Core i5-4440 3.10GHz, running at 3093 MHz
> 8 GB of RAM (1x8)
> 500 GB disk (Seagate ST500DM002, 7200 rpm)
> CentOS 7 (netinstall, for the most up-to-date and stable packages)
>
> *How I am testing
> I chose two benchmark tools: sysbench (from the EPEL repo on CentOS) and novabench (for the Windows guest, https://novabench.com ). Then I made a clean install of XenServer and created two guests (CentOS and Windows 7 SP1).
>
> *The guest specs
> 2 cores
> 4 GB of RAM
> 60 GB of disk (using virtio on NFS storage)

The nfs server is in the host?

> Important note: only the guest being tested is up during the benchmark, and I have installed the drivers in the guest.
>
> *The sysbench disk test (creates 10 GB of data and runs the bench):
> # sysbench --test=fileio --file-total-size=10G prepare
> # sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
>
> Host result: 2.9843Mb/sec
> Ovirt result: 1.1561Mb/sec
> Xenserver result: 2.9006Mb/sec

- Is this 2.9GiB/s or 2.9 MiB/s?
- Are you testing using NFS in all versions?
- What is the disk format?
- How do you test io on the host?
- What kind of nic is used? (1G, 10G?)

> *The novabench test:
> Ovirt result: 79Mb/s
> Xenserver result: 101Mb/s

We need much more details to understand what you are testing here.

- For ovirt, can you share the vm command line, available in /var/log/libvirt/qemu/vmname.log?
- For Xenserver - a detailed description of the vm and the storage configuration?

Nir
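As a side note for anyone replaying these runs: the per-second rate sysbench prints in its fileio summary can be pulled out with a few lines of script. The sample line below mimics sysbench 0.4's output format (assumed from the figures quoted in this thread; adjust the regex if your version prints differently):

```python
import re

# Sample fileio summary line in the style of sysbench 0.4 (assumed format).
sample = "Read 150.48Mb  Written 100.27Mb  Total transferred 250.75Mb  (2.9843Mb/sec)"

def transfer_rate(line):
    """Pull the Mb/sec figure out of a sysbench fileio summary line."""
    m = re.search(r"\(([\d.]+)Mb/sec\)", line)
    return float(m.group(1)) if m else None

host, ovirt = 2.9843, 1.1561              # results reported in the thread
print(transfer_rate(sample))              # -> 2.9843
print(f"oVirt reaches {ovirt / host:.0%} of the bare-metal result")  # -> 39%
```

Comparing the extracted rates this way makes the gap concrete: the oVirt guest achieves well under half of the bare-metal random read/write throughput, while the XenServer guest is near parity.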
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users