[ovirt-users] Fwd: Re: ***UNCHECKED*** Re: kvm vcpu0 unhandled rdmsr

Yaniv Kaul ykaul at redhat.com
Mon Apr 11 06:06:46 UTC 2016


On Sun, Apr 10, 2016 at 11:22 PM, Yaniv Kaul <ykaul at redhat.com> wrote:

> On Sun, Apr 10, 2016 at 6:05 PM, gregor <gregor_forum at catrix.at> wrote:
>
>> Hi,
>>
>> does anybody have a last tip? The third Windows Server 2012 R2 VM I
>> installed is now damaged, and tomorrow I will move my host back to
>> VMware and leave oVirt.
>>
>
> It's a QEMU/KVM issue - let me see if I can get someone from the KVM
> development team to get the details from you.
>

I've contacted them. They asked you to file a bug in Red Hat Bugzilla and
state:
1. The version of qemu-kvm (or qemu-kvm-ev) you are using.
2. The command line (which I see you have already pasted into the email).
3. The exact MSR message you are seeing.
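
For reference, those details can be collected on the host with something
like the following (a sketch assuming a standard CentOS 7 host):

  # 1. installed package version
  rpm -q qemu-kvm qemu-kvm-ev

  # 2. command line of the running VM
  ps -ef | grep [q]emu-kvm

  # 3. the exact msr messages from the kernel log
  dmesg | grep -i 'unhandled rdmsr'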

Thanks,
Y.


> Y.
>
>
>
>>
>> regards
>> gregor
>>
>> -------- Forwarded Message --------
>> Subject: Re: [ovirt-users] ***UNCHECKED*** Re:  kvm vcpu0 unhandled rdmsr
>> Date: Mon, 4 Apr 2016 15:06:52 +0200
>> From: gregor <gregor_forum at catrix.at>
>> To: Yaniv Kaul <ykaul at redhat.com>
>> CC: users <users at ovirt.org>
>>
>> Hi,
>>
>> the host and VMs are all up to date with the latest packages for
>> CentOS 7.*.
>>
>> In /proc/cpuinfo I see "nx" in the flags list; the full list is at the
>> end of the mail.
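>> (A quick way to check is e.g. "grep -c ' nx ' /proc/cpuinfo"; a
>> non-zero count means the flag is present.)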
>>
>> Is it possible that this problem destroyed the Windows Server 2012 R2
>> VM? I am now starting the third installation; hopefully this time it
>> will not get damaged. If it fails again I will have to use another
>> virtualization product and leave oVirt, and I was so happy to leave
>> VMware :°(
>>
>> This is the command line for a VM (obtained with ps aux ...):
>> /usr/libexec/qemu-kvm -name srv02 -S
>>   -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off
>>   -cpu Westmere
>>   -m size=2097152k,slots=16,maxmem=4294967296k
>>   -realtime mlock=off
>>   -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
>>   -numa node,nodeid=0,cpus=0,mem=2048
>>   -uuid 6765fd03-ac0d-49ea-b8ba-cf10c60d3968
>>   -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=39343937-3439-5A43-3135-353130324542,uuid=6765fd03-ac0d-49ea-b8ba-cf10c60d3968
>>   -no-user-config -nodefaults
>>   -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-srv02/monitor.sock,server,nowait
>>   -mon chardev=charmonitor,id=monitor,mode=control
>>   -rtc base=2016-04-03T21:24:06,driftfix=slew
>>   -global kvm-pit.lost_tick_policy=discard
>>   -no-hpet -no-shutdown -boot strict=on
>>   -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>>   -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
>>   -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
>>   -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
>>   -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>>   -drive file=/rhev/data-center/00000001-0001-0001-0001-00000000033d/4443edf0-54aa-4ef5-84c2-a433813f304a/images/f596f9a8-c6c4-41b8-b547-7f83829807fe/5028abbd-35c8-4dcd-95a0-3d0c61dfc2b7,if=none,id=drive-virtio-disk0,format=raw,serial=f596f9a8-c6c4-41b8-b547-7f83829807fe,cache=none,werror=stop,rerror=stop,aio=threads
>>   -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>   -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31
>>   -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:57,bus=pci.0,addr=0x3
>>   -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/6765fd03-ac0d-49ea-b8ba-cf10c60d3968.com.redhat.rhevm.vdsm,server,nowait
>>   -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>   -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/6765fd03-ac0d-49ea-b8ba-cf10c60d3968.org.qemu.guest_agent.0,server,nowait
>>   -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>>   -chardev spicevmc,id=charchannel2,name=vdagent
>>   -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>>   -spice port=5904,tls-port=5905,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on
>>   -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2
>>   -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
>>   -msg timestamp=on
>>
>> Here is the full flags list:
>> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
>> clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb
>> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
>> nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx
>> smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe
>> popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm arat epb
>> pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust
>> bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc
>>
>> On 04/04/16 08:52, Yaniv Kaul wrote:
>> >
>> >
>> > On Sun, Apr 3, 2016 at 10:07 PM, gregor <gregor_forum at catrix.at> wrote:
>> >
>> >     Update: the problem occurs when a VM reboots.
>> >     When I change the CPU type from the default "Intel Haswell-noTSX"
>> >     to "Westmere" the error is gone.
>> >
>> >
>> > The error "kvm ... vcpu0 unhandled rdmsr ..." is quite harmless.
>> > I assume you are running the latest qemu/kvm packages.
>> > Can you ensure NX is enabled on your host?
>> > In any case, this is most likely a qemu/kvm issue - the command line of
>> > the VM and information regarding the qemu packages and host versions
>> > will be needed.
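>> > (On the host, something like "dmesg | grep -i 'Execute Disable'"
>> > should show whether NX protection is active, assuming the usual
>> > kernel boot message is present.)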
>> > Y.
>> >
>> >
>> >
>> >     But which CPU type is now the best choice so that I don't lose
>> >     performance?
>> >
>> >     Host CPU: Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz
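>> >     (A rough way to check is to run e.g. "virsh capabilities" on the
>> >     host, which prints the CPU model and feature flags libvirt
>> >     detects.)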
>> >
>> >     regards
>> >     gregor
>> >
>> >     On 03/04/16 20:36, gregor wrote:
>> >     > Hi,
>> >     >
>> >     > on one host I very often get the message
>> >     >
>> >     > "kvm ... vcpu0 unhandled rdmsr ..."
>> >     >
>> >     > When this occurs, some VMs get stuck. This is really bad for a
>> >     > Windows Server 2012 R2 VM, which hangs so hard that it becomes
>> >     > corrupt and can no longer boot, and no kind of Windows recovery
>> >     > helped. I therefore had to reinstall the VM; this worked for
>> >     > some days, but now the VM is damaged again. So I can't use
>> >     > Windows Server 2012 R2 on this machine, but the customer needs
>> >     > it and I only have a few days to ship it. So I have to decide
>> >     > whether to stay with oVirt or use another product. Besides,
>> >     > oVirt has run very well on my other hosts (without Windows VMs)
>> >     > for a long time.
>> >     > On a CentOS 7 VM I have similar problems: the NIC sometimes
>> >     > goes offline and the XFS filesystems get errors that I have to
>> >     > fix in recovery mode.
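>> >     > (Typically by booting a rescue system and running e.g.
>> >     > "xfs_repair /dev/<disk>" against the unmounted filesystem;
>> >     > <disk> is a placeholder for the affected device.)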
>> >     >
>> >     > oVirt: 3.6.4.1-1.el7.centos
>> >     > machine: HP ProLiant ML110 Gen9
>> >     > VMs: three CentOS 7 and one Windows Server 2012 R2
>> >     >
>> >     > I hope somebody can help.
>> >     >
>> >     > regards
>> >     > gregor
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>