[ovirt-users] Info about windows guest performance

Michal Skrivanek michal.skrivanek at redhat.com
Fri Feb 9 15:25:34 UTC 2018



> On 9 Feb 2018, at 14:04, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
> 
> Hello,
> while working on migrating a Windows 2008 R2 VM (with an Oracle RDBMS inside) from vSphere to oVirt, I'm checking performance-related things.
> 
> Up to now I have only run Windows guests on my laptop, not inside an oVirt infrastructure.
> 
> Now I successfully migrated this kind of VM to oVirt 4.1.9.
> The guest had an LSI Logic SAS controller. On the oVirt host that I used as the virt-v2v proxy (for the VMware conversion) I initially didn't have the virtio-win rpm installed.
> I presume that is the reason the oVirt guest was configured with IDE disks…

yes
You won't get any decent performance unless you use virtio drivers, either virtio-blk or virtio-scsi.
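
For comparison (an illustrative sketch only, with made-up paths and ids, not your exact config), an IDE boot disk typically appears on the qemu command line roughly as:

-drive file=/path/to/disk.qcow2,format=qcow2,if=none,id=drive-ide0-0-0,cache=none
-device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1

while the same disk attached as virtio-blk would look something like:

-drive file=/path/to/disk.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=none
-device virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1

Your command below already shows the virtio-scsi variant: a virtio-scsi-pci controller plus scsi-hd devices.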

> Can you confirm?
> 
> For this test I started with IDE, then added a virtio-scsi disk, and then also changed the boot disk to virtio-scsi. Everything now works well, and I used the ISO provided by ovirt-guest-tools-iso-4.1-3 to install QXL drivers and so on...
> 
> So far so good.
> I found this bugzilla:
> https://bugzilla.redhat.com/show_bug.cgi?id=1277353
> 
> where it seems that 
> 
> "
> For optimum I/O performance it's critical to make sure that Windows guests use the Hyper-V reference counter feature. QEMU command line should include
> 
> -cpu ...,hv_time
> 
> and
> 
> -no-hpet
> "
> Analyzing my command line I see "-no-hpet" but I don't see "hv_time".
> See the full command below.
> Any hints?

What OS type do you have set for that VM? Make sure it matches the Windows version; that is what enables the Hyper-V enlightenment settings.
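
For reference (illustrative only; the exact flags depend on the oVirt/libvirt version and the Windows variant selected), once the OS type is set to a Windows guest the generated -cpu option should pick up the Hyper-V enlightenments, something like:

-cpu Westmere,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff

which corresponds to libvirt's <hyperv> features plus <timer name='hypervclock' present='yes'/> in the domain XML, and gives Windows the reference time counter the bugzilla entry talks about.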

Thanks,
michal
> Thanks,
> Gianluca
> 
> /usr/libexec/qemu-kvm
> -name guest=testmig,debug-threads=on
> -S
> -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-12-testmig/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
> -cpu Westmere,vmx=on
> -m size=4194304k,slots=16,maxmem=16777216k
> -realtime mlock=off
> -smp 2,maxcpus=16,sockets=16,cores=1,threads=1
> -numa node,nodeid=0,cpus=0-1,mem=4096
> -uuid x-y-z-x-y
> -smbios type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-4.1708.el7.centos,serial=xx,uuid=yy
> -no-user-config
> -nodefaults
> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-12-testmig/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control
> -rtc base=2018-02-09T12:41:41,driftfix=slew
> -global kvm-pit.lost_tick_policy=delay
> -no-hpet
> -no-shutdown
> -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5
> -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
> -drive if=none,id=drive-ide0-1-0,readonly=on
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/2de93ee3-7d6e-4a10-88c4-abc7a11fb687/a9f4e35b-4aa0-45e8-b775-1a046d1851aa,format=qcow2,if=none,id=drive-scsi0-0-0-1,serial=2de93ee3-7d6e-4a10-88c4-abc7a11fb687,cache=none,werror=stop,rerror=stop,aio=native
> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1
> -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/f821da0a-cec7-457c-88a4-f83f33404e65/0d0c4244-f184-4eaa-b5bf-8dc65c7069bb,format=raw,if=none,id=drive-scsi0-0-0-0,serial=f821da0a-cec7-457c-88a4-f83f33404e65,cache=none,werror=stop,rerror=stop,aio=native
> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
> -netdev tap,fd=30,id=hostnet0
> -device e1000,netdev=hostnet0,id=net0,mac=00:50:56:9d:c9:29,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent
> -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5900,addr=10.4.192.32,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
> -msg timestamp=on
> 
