<p dir="ltr">Il 02/Set/2016 08:32 PM, "Yaniv Kaul" <<a href="mailto:ykaul@redhat.com">ykaul@redhat.com</a>> ha scritto:<br>
><br>
><br>
><br>
> On Fri, Sep 2, 2016 at 6:11 PM, Gabriel Ozaki <<a href="mailto:gabriel.ozaki@kemi.com.br">gabriel.ozaki@kemi.com.br</a>> wrote:<br>
>><br>
>> Hi Yaniv<br>
>><br>
>> Sorry guys, I didn't explain it well in my first mail: I noticed bad IO performance in disk benchmarks; the network is working really fine<br>
><br>
><br>
> But where is the disk? If it's across the network, then the network is involved and is certainly a bottleneck.</p>
<p dir="ltr">No yaniv. It's hyperconverged setup with local storage exported over nfs v3<br><br></p>
<p dir="ltr">> Y.<br>
> <br>
>><br>
>><br>
>><br>
>><br>
>><br>
>><br>
>><br>
>><br>
>> 2016-09-02 12:04 GMT-03:00 Yaniv Kaul <<a href="mailto:ykaul@redhat.com">ykaul@redhat.com</a>>:<br>
>>><br>
>>> On Fri, Sep 2, 2016 at 5:33 PM, Gabriel Ozaki <<a href="mailto:gabriel.ozaki@kemi.com.br">gabriel.ozaki@kemi.com.br</a>> wrote:<br>
>>>><br>
>>>> Hi Nir, thanks for the answer<br>
>>>><br>
>>>> The nfs server is in the host?<br>
>>>> Yes, I chose NFS as the storage for the oVirt host<br>
>>>><br>
>>>> - Is this 2.9GiB/s or 2.9 MiB/s?<br>
>>>> It's MiB/s; I put the full test output on Pastebin<br>
>>>> centos guest on ovirt:<br>
>>>> <a href="http://pastebin.com/d48qfvuf">http://pastebin.com/d48qfvuf</a><br>
>>>><br>
>>>> centos guest on xenserver:<br>
>>>> <a href="http://pastebin.com/gqN3du29">http://pastebin.com/gqN3du29</a><br>
>>>><br>
>>>> how the test works:<br>
>>>> <a href="https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench">https://www.howtoforge.com/how-to-benchmark-your-system-cpu-file-io-mysql-with-sysbench</a><br>
>>>><br>
>>>> - Are you testing using NFS in all versions?<br>
>>>> I am using NFS v3<br>
>>>><br>
>>>> - What is the disk format?<br>
>>>> partition, size, format:<br>
>>>> /, 20 GB, xfs<br>
>>>> swap, 2 GB, xfs<br>
>>>> /dados, rest of disk, xfs (note: this is the partition where I keep the ISOs, exports, and VM disks)<br>
>>>><br>
>>>> - How do you test io on the host?<br>
>>>> I did a clean install of CentOS and ran the test before installing oVirt<br>
>>>> the test:<br>
>>>> <a href="http://pastebin.com/7RKU7778">http://pastebin.com/7RKU7778</a><br>
>>>><br>
>>>> - What kind of nic is used? (1G, 10G?)<br>
>>>> It's only 100 Mbps :(<br>
>>><br>
>>><br>
>>> 100 Mbps is only 12.5 MB/s of raw bandwidth (100 / 8), so it will not get you more than several MB/s after protocol overhead. 11 MB/s on a very bright day... <br>
>>>><br>
>>>><br>
>>>> We need many more details to understand what you are testing here.<br>
>>>> I had problems uploading the oVirt benchmark result to the Novabench site, so here is the screenshot (I made a mistake in the last email and got the wrong value); it is 86 MB/s:<br>
>>><br>
>>><br>
>>> Which is not possible on the wire. Unless it's VM to VM? And the storage is local, which means it's the bandwidth of the physical disk itself?<br>
>>> Y.<br>
>>><br>
>>> <br>
>>>><br>
>>>><br>
>>>> <br>
>>>> And the novabench on xenserver:<br>
>>>> <a href="https://novabench.com/compare.php?id=ba8dd628e4042dfc1f3d39670b164ab11061671">https://novabench.com/compare.php?id=ba8dd628e4042dfc1f3d39670b164ab11061671</a><br>
>>>><br>
>>>> - For Xenserver - detailed description of the vm and the storage configuration?<br>
>>>> The host is the same (I installed XenServer and ran the tests before installing CentOS), and the VM uses the same configuration as on oVirt: 2 cores, 4 GB of RAM, and a 60 GB disk (in the default XenServer SR)<br>
>>>><br>
>>>> - For ovirt, can you share the vm command line, available in /var/log/libvirt/qemu/vmname.log?<br>
>>>> 2016-09-01 12:50:28.268+0000: starting up libvirt version: 1.2.17, package: 13.el7_2.5 (CentOS BuildSystem <<a href="http://bugs.centos.org">http://bugs.centos.org</a>>, 2016-06-23-14:23:27, <a href="http://worker1.bsys.centos.org">worker1.bsys.centos.org</a>), qemu version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7.16.1)<br>
>>>> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name vmcentos -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m size=4194304k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=4096 -uuid 21872e4b-7699-4502-b1ef-2c058eff1c3c -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=03AA02FC-0414-05F8-D906-710700080009,uuid=21872e4b-7699-4502-b1ef-2c058eff1c3c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-vmcentos/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2016-09-01T09:50:28,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive file=/rhev/data-center/mnt/ovirt.kemi.intranet:_dados_iso/52ee9f87-9d38-48ec-8003-193262f81994/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-NetInstall-1511.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2 -drive file=/rhev/data-center/00000001-0001-0001-0001-0000000002bb/4ccdd1f3-ee79-4425-b6ed-5774643003fa/images/2ecfcf18-ae84-4e73-922f-28b9cda9e6e1/800f05bf-23f7-4c9d-8c1d-b2503592875f,if=none,id=drive-virtio-disk0,format=raw,serial=2ecfcf18-ae84-4e73-922f-28b9cda9e6e1,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/21872e4b-7699-4502-b1ef-2c058eff1c3c.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/21872e4b-7699-4502-b1ef-2c058eff1c3c.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc <a href="http://192.168.0.189:0">192.168.0.189:0</a>,password -k pt-br -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on<br>
>>>> 2016-09-01T12:50:28.307173Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15<br>
>>>> 2016-09-01T12:50:28.307371Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config<br>
>>>> qemu: terminating on signal 15 from pid 1<br>
>>>> 2016-09-01 19:13:47.899+0000: shutting down<br>
>>>><br>
>>>><br>
>>>> Thanks<br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>> 2016-09-02 11:05 GMT-03:00 Nir Soffer <<a href="mailto:nsoffer@redhat.com">nsoffer@redhat.com</a>>:<br>
>>>>><br>
>>>>> On Fri, Sep 2, 2016 at 4:44 PM, Gabriel Ozaki <<a href="mailto:gabriel.ozaki@kemi.com.br">gabriel.ozaki@kemi.com.br</a>> wrote:<br>
>>>>>><br>
>>>>>> Hi<br>
>>>>>> I am trying oVirt 4.0 and I am getting some strange results when comparing it with XenServer<br>
>>>>>><br>
>>>>>> *The host machine<br>
>>>>>> Intel Core i5-4440 3.10GHz running at 3093 MHz<br>
>>>>>> 8 GB of RAM (1x8)<br>
>>>>>> 500 GB of disk (Seagate ST500DM002, 7200 rpm)<br>
>>>>>> CentOS 7 (netinstall for the most updated and stable packages)<br>
>>>>>><br>
>>>>>><br>
>>>>>> *How i am testing:<br>
>>>>>> I chose two benchmark tools, sysbench (from the EPEL repo on CentOS) and Novabench (for the Windows guest, <a href="https://novabench.com">https://novabench.com</a> ), then did a clean install of XenServer and created two guests (CentOS and Windows 7 SP1)<br>
>>>>>><br>
>>>>>> *The Guest specs<br>
>>>>>> 2 cores<br>
>>>>>> 4 GB of RAM<br>
>>>>>> 60 GB of disk (using virtio on NFS storage)<br>
>>>>><br>
>>>>><br>
>>>>> The nfs server is in the host?<br>
>>>>> <br>
>>>>>><br>
>>>>>> Important note: only the guest being tested is up during the benchmark, and I have installed the drivers in the guest<br>
>>>>>><br>
>>>>>> *The sysbench disk test (creates 10 GB of data and runs the benchmark):<br>
>>>>>> # sysbench --test=fileio --file-total-size=10G prepare<br>
>>>>>> # sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run<br>
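>>>>>> (plus the cleanup step afterwards to remove the 10 GB of test files; this is standard sysbench usage and is not part of the timed run)<br>
>>>>>> # sysbench --test=fileio --file-total-size=10G cleanup<br>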
>>>>>><br>
>>>>>> Host result: 2.9843Mb/sec<br>
>>>>>> Ovirt result: 1.1561Mb/sec<br>
>>>>>> Xenserver result: 2.9006Mb/sec<br>
>>>>><br>
>>>>><br>
>>>>> - Is this 2.9GiB/s or 2.9 MiB/s?<br>
>>>>> - Are you testing using NFS in all versions?<br>
>>>>> - What is the disk format?<br>
>>>>> - How do you test io on the host?<br>
>>>>> - What kind of nic is used? (1G, 10G?)<br>
>>>>><br>
>>>>>><br>
>>>>>> *The novabench test:<br>
>>>>>> Ovirt result: 79 MB/s<br>
>>>>>> Xenserver result: 101 MB/s<br>
>>>>><br>
>>>>><br>
>>>>> We need many more details to understand what you are testing here.<br>
>>>>><br>
>>>>> - For ovirt, can you share the vm command line, available in /var/log/libvirt/qemu/vmname.log?<br>
>>>>> - For Xenserver - detailed description of the vm and the storage configuration?<br>
>>>>><br>
>>>>> Nir<br>
>>>><br>
>>>><br>
>>>><br>
>>>> _______________________________________________<br>
>>>> Users mailing list<br>
>>>> <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
>>>> <a href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a><br>
>>>><br>
>>><br>
>><br>
><br>
><br>
></p>