[Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

Hi list,

I tested the upcoming CentOS 6.4 release with my lab installation of
oVirt 3.1, and it fails to play well.

Background: a freshly installed CentOS 6.3 host in a Nehalem CPU-type
cluster with 2 other hosts. Storage is iSCSI. Datacenter and cluster
are both version 3.1. oVirt 3.1 was installed via Dreyou's repo.

On CentOS 6.3 all is fine, with the following rpms installed:

  libvirt.x86_64              0.9.10-21.el6_3.8
  libvirt-client.x86_64       0.9.10-21.el6_3.8
  libvirt-lock-sanlock.x86_64 0.9.10-21.el6_3.8
  libvirt-python.x86_64       0.9.10-21.el6_3.8
  vdsm.x86_64                 4.10.0-0.46.15.el6
  vdsm-cli.noarch             4.10.0-0.46.15.el6
  vdsm-python.x86_64          4.10.0-0.46.15.el6
  vdsm-xmlrpc.noarch          4.10.0-0.46.15.el6
  qemu-kvm.x86_64             2:0.12.1.2-2.295.el6_3.10

  uname -a
  Linux vh-test1.mydomain.com 2.6.32-279.22.1.el6.x86_64 #1 SMP
  Wed Feb 6 03:10:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

virsh cpu capabilities on 6.3:

  <cpu>
    <arch>x86_64</arch>
    <model>Nehalem</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='4' threads='1'/>
    <feature name='rdtscp'/>
    <feature name='pdcm'/>
    <feature name='xtpr'/>
    <feature name='tm2'/>
    <feature name='est'/>
    <feature name='smx'/>
    <feature name='vmx'/>
    <feature name='ds_cpl'/>
    <feature name='monitor'/>
    <feature name='dtes64'/>
    <feature name='pbe'/>
    <feature name='tm'/>
    <feature name='ht'/>
    <feature name='ss'/>
    <feature name='acpi'/>
    <feature name='ds'/>
    <feature name='vme'/>
  </cpu>

and the corresponding cpu features from vdsClient:

  cpuCores = 4
  cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,
      cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
      tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
      pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
      dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
      pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
      flexpriority,ept,vpid,model_Conroe,model_Penryn,
      model_Nehalem
  cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
  cpuSockets = 1
  cpuSpeed = 2394.132

The system was then updated to 6.4 using the continuous release repo.
Installed rpms after the update to 6.4 (6.3 + CR):

  libvirt.x86_64              0.10.2-18.el6
  libvirt-client.x86_64       0.10.2-18.el6
  libvirt-lock-sanlock.x86_64 0.10.2-18.el6
  libvirt-python.x86_64       0.10.2-18.el6
  vdsm.x86_64                 4.10.0-0.46.15.el6
  vdsm-cli.noarch             4.10.0-0.46.15.el6
  vdsm-python.x86_64          4.10.0-0.46.15.el6
  vdsm-xmlrpc.noarch          4.10.0-0.46.15.el6
  qemu-kvm.x86_64             2:0.12.1.2-2.355.el6_4_4.1

  uname -a
  Linux vh-test1.mydomain.com 2.6.32-358.0.1.el6.x86_64 #1 SMP
  Wed Feb 27 06:06:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

virsh capabilities on 6.4 report the same <cpu> element as on 6.3
(model Nehalem, identical feature list), but the corresponding cpu
features from vdsClient now differ:

  cpuCores = 4
  cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,
      cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
      tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
      pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
      dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
      pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
      flexpriority,ept,vpid,model_coreduo,model_Conroe
  cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
  cpuSockets = 1
  cpuSpeed = 2394.098

Full outputs of virsh capabilities and vdsCaps are attached. The only
difference I can see is that 6.4 exposes one additional cpu flag (sep),
and this seems to break the cpu recognition of vdsm: the model_* flags
dropped from model_Conroe/model_Penryn/model_Nehalem down to
model_coreduo/model_Conroe.

Does anyone have hints on how to resolve or debug this further? What
more information can I provide to help?

Best regards
Patrick

--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich

On 04/03/2013 19:03, Patrick Hurrelmann wrote:
> Hi list,
>
> I tested the upcoming CentOS 6.4 release with my lab installation of
> oVirt 3.1, and it fails to play well.
<snip>
> Full outputs of virsh capabilities and vdsCaps are attached. The only
> difference I can see is that 6.4 exposes one additional cpu flag (sep),
> and this seems to break the cpu recognition of vdsm.
>
> Does anyone have hints on how to resolve or debug this further? What
> more information can I provide to help?
seems like a vdsm issue - can you check if you have this patch (not
sure it's related):

  commit 558994f8ffe030acd1b851dfd074f3417681337b
  Author: Mark Wu <wudxw@linux.vnet.ibm.com>
  Date:   Tue Oct 9 11:18:10 2012 +0800

      Fix a bug in the naming conversion of cpu feature 'sse4_x'

      The list slice operation missed the last character, so the
      feature name never got a chance to be converted into libvirt's
      naming.

      Change-Id: Ia241b09c96fa16441ba9421f61a2f9a417f0d978
      Signed-off-by: Mark Wu <wudxw@linux.vnet.ibm.com>
      Reviewed-on: http://gerrit.ovirt.org/8413
      Reviewed-by: Dan Kenigsberg <danken@redhat.com>
      Reviewed-by: Michal Skrivanek <michal.skrivanek@redhat.com>

  diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py
  index 64a3b6b..bd43b56 100644
  --- a/vdsm/libvirtvm.py
  +++ b/vdsm/libvirtvm.py
  @@ -735,7 +735,7 @@ class _DomXML:
           for feature in features[1:]:
               # convert Linux name of feature to libvirt
  -            if feature[1:5] == 'sse4_':
  +            if feature[1:6] == 'sse4_':
                   feature = feature[0] + 'sse4.' + feature[6:]
               f = self.doc.createElement('feature')
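The off-by-one is easier to see in isolation. A minimal sketch (my
illustration, not from the thread), assuming the one-character '+'/'-'
modifier prefix that the code implies:

  # feature strings arrive as e.g. '+sse4_1'; index 0 is the modifier
  feature = '+sse4_1'

  print(feature[1:5])  # 'sse4'  -- never equals 'sse4_', so the old
                       #            code never converted anything
  print(feature[1:6])  # 'sse4_' -- the fixed slice matches

  if feature[1:6] == 'sse4_':
      # rename to libvirt's spelling: '+sse4_1' -> '+sse4.1'
      feature = feature[0] + 'sse4.' + feature[6:]
  print(feature)       # '+sse4.1'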

On Tue 05 Mar 2013 04:52:05 AM CST, Itamar Heim wrote:
> On 04/03/2013 19:03, Patrick Hurrelmann wrote:
> > I tested the upcoming CentOS 6.4 release with my lab installation of
> > oVirt 3.1, and it fails to play well.
<snip>
> seems like a vdsm issue - can you check if you have this patch (not
> sure it's related):
>
>   commit 558994f8ffe030acd1b851dfd074f3417681337b
>   Fix a bug in the naming conversion of cpu feature 'sse4_x'
>
>   -            if feature[1:5] == 'sse4_':
>   +            if feature[1:6] == 'sse4_':
<snip>
What are the missing cpu features reported by the engine? Does it work
if you change the cluster cpu model to Conroe?

On 04.03.2013 21:52, Itamar Heim wrote:
On 04/03/2013 19:03, Patrick Hurrelmann wrote:
Hi list,
I tested the upcoming CentOS 6.4 release with my lab installation of oVirt 3.1 and it fails to play well.
Background: freshly installed CentOS 6.3 host in a Nehalem CPU-type Cluster with 2 other hosts. Storage is iSCSI. Datacenter and Cluster are both version 3.1. oVirt 3.1 was installed via Dreyou's repo.
In CentOS 6.3 all is fine and the following rpms are installed:
libvirt.x86_64 0.9.10-21.el6_3.8 libvirt-client.x86_64 0.9.10-21.el6_3.8 libvirt-lock-sanlock.x86_64 0.9.10-21.el6_3.8 libvirt-python.x86_64 0.9.10-21.el6_3.8 vdsm.x86_64 4.10.0-0.46.15.el6 vdsm-cli.noarch 4.10.0-0.46.15.el6 vdsm-python.x86_64 4.10.0-0.46.15.el6 vdsm-xmlrpc.noarch 4.10.0-0.46.15.el6 qemu-kvm.x86_64 2:0.12.1.2-2.295.el6_3.10
uname -a Linux vh-test1.mydomain.com 2.6.32-279.22.1.el6.x86_64 #1 SMP Wed Feb 6 03:10:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
virsh cpu capabilities on 6.3: <cpu> <arch>x86_64</arch> <model>Nehalem</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='1'/> <feature name='rdtscp'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='smx'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu>
and corresponding cpu features from vdsClient:
cpuCores = 4 cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca, cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht, tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon, pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni, dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr, pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi, flexpriority,ept,vpid,model_Conroe,model_Penryn, model_Nehalem cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz cpuSockets = 1 cpuSpeed = 2394.132
So the system was updated to 6.4 using the continuous release repo.
Installed rpms after update to 6.4 (6.3 + CR):
libvirt.x86_64 0.10.2-18.el6 libvirt-client.x86_64 0.10.2-18.el6 libvirt-lock-sanlock.x86_64 0.10.2-18.el6 libvirt-python.x86_64 0.10.2-18.el6 vdsm.x86_64 4.10.0-0.46.15.el6 vdsm-cli.noarch 4.10.0-0.46.15.el6 vdsm-python.x86_64 4.10.0-0.46.15.el6 vdsm-xmlrpc.noarch 4.10.0-0.46.15.el6 qemu-kvm.x86_64 2:0.12.1.2-2.355.el6_4_4.1
uname -a Linux vh-test1.mydomain.com 2.6.32-358.0.1.el6.x86_64 #1 SMP Wed Feb 27 06:06:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
virsh capabilities on 6.4: <cpu> <arch>x86_64</arch> <model>Nehalem</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='1'/> <feature name='rdtscp'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='smx'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu>
and corresponding cpu features from vdsClient:
cpuCores = 4 cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca, cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht, tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon, pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni, dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr, pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi, flexpriority,ept,vpid,model_coreduo,model_Conroe cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz cpuSockets = 1 cpuSpeed = 2394.098
Full outputs of virsh capabilities and vdsCaps are attached. The only difference I can see is that 6.4 exposes one additional cpu flags (sep) and this seems to break the cpu recognition of vdsm.
Anyone has some hints on how to resolve or debug this further? What more information can I provide to help?
Best regards Patrick
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
seems like a vdsm issue - can you check if you have this patch (not sure its related):
commit 558994f8ffe030acd1b851dfd074f3417681337b Author: Mark Wu <wudxw@linux.vnet.ibm.com> Date: Tue Oct 9 11:18:10 2012 +0800
Fix a bug in the naming convertion of cpu feature 'sse4_x'
The list slice opertion missed the last character. It caused the feature name couldn't get chance to be converted into libvirt's naming.
Change-Id: Ia241b09c96fa16441ba9421f61a2f9a417f0d978 Signed-off-by: Mark Wu <wudxw@linux.vnet.ibm.com> Reviewed-on: http://gerrit.ovirt.org/8413 Reviewed-by: Dan Kenigsberg <danken@redhat.com> Reviewed-by: Michal Skrivanek <michal.skrivanek@redhat.com>
diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py index 64a3b6b..bd43b56 100644 --- a/vdsm/libvirtvm.py +++ b/vdsm/libvirtvm.py @@ -735,7 +735,7 @@ class _DomXML:
for feature in features[1:]: # convert Linux name of feature to libvirt - if feature[1:5] == 'sse4_': + if feature[1:6] == 'sse4_': feature = feature[0] + 'sse4.' + feature[6:]
f = self.doc.createElement('feature')
My version of vdsm, as stated by Dreyou: v4.10.0-0.46 (.15), built from
b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1).

I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was ever
merged to the 3.1 branch.

I applied that patch locally and restarted vdsmd, but it does not
change anything. The supported cpu is still as low as Conroe instead of
Nehalem. Or is there more to do than patching libvirtvm.py?

Regards
Patrick

--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich

On Tue, Mar 05, 2013 at 10:21:16AM +0100, Patrick Hurrelmann wrote:
<snip>
> My version of vdsm, as stated by Dreyou: v4.10.0-0.46 (.15), built from
> b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1).
>
> I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was ever
> merged to the 3.1 branch.
>
> I applied that patch locally and restarted vdsmd, but it does not
> change anything. The supported cpu is still as low as Conroe instead
> of Nehalem. Or is there more to do than patching libvirtvm.py?
What is libvirt's opinion about your cpu compatibility?

  virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')

If you do not get "Host CPU is a superset of CPU described in bla",
then the problem is within libvirt.

Dan.

On 05.03.2013 10:54, Dan Kenigsberg wrote:
<snip>
> What is libvirt's opinion about your cpu compatibility?
>
>   virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
>
> If you do not get "Host CPU is a superset of CPU described in bla",
> then the problem is within libvirt.
Hi Dan,

  virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
  Host CPU is a superset of CPU described in /dev/fd/63

So libvirt obviously is fine. Anything different would have surprised
me, as virsh capabilities looked correct anyway.

Regards
Patrick

--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich

On Tue, Mar 05, 2013 at 11:01:53AM +0100, Patrick Hurrelmann wrote:
<snip>
>   virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
>   Host CPU is a superset of CPU described in /dev/fd/63
>
> So libvirt obviously is fine. Anything different would have surprised
> me, as virsh capabilities looked correct anyway.
So maybe, just maybe, libvirt has changed its cpu_map - a map that
ovirt-3.1 had a bug reading. Would you care to apply
http://gerrit.ovirt.org/5035 to see if this is it?

Dan.
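For context on what Dan's hunch implies: vdsm derives the model_*
pseudo-flags seen in getVdsCaps by matching the host's flags against
libvirt's cpu map. A minimal sketch of that idea (my simplification,
not vdsm's actual code; among other things it ignores the
<model>-inside-<model> inheritance the real map uses):

  import xml.dom.minidom

  CPU_MAP = '/usr/share/libvirt/cpu_map.xml'

  def compatible_models(host_flags):
      # the cpu map spells some flags differently from /proc/cpuinfo,
      # e.g. sse4.1 vs sse4_1 (cf. the sse4_x patch quoted earlier)
      flags = set(f.replace('sse4_', 'sse4.') for f in host_flags)
      dom = xml.dom.minidom.parse(CPU_MAP)
      result = []
      for model in dom.getElementsByTagName('model'):
          needed = set(f.getAttribute('name')
                       for f in model.getElementsByTagName('feature'))
          # skip models defined purely by inheriting another model
          if needed and needed.issubset(flags):
              result.append('model_' + model.getAttribute('name'))
      return result

  # e.g. compatible_models(['vmx', 'ssse3', 'sse4_1', 'sse4_2', ...])

A parser like this only works as long as the map keeps the layout it
was written against - which is why a cpu_map change between libvirt
releases can silently shrink the reported model list.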

On 05.03.2013 11:14, Dan Kenigsberg wrote: <snip>
> So maybe, just maybe, libvirt has changed its cpu_map - a map that
> ovirt-3.1 had a bug reading. Would you care to apply
> http://gerrit.ovirt.org/5035 to see if this is it?
Hi Dan,

success! Applying that patch made the cpu recognition work again. The
cpu type in the admin portal shows as Nehalem again. Output from
getVdsCaps:

  cpuCores = 4
  cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,
      mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,
      ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,
      arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,
      aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,
      ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,
      dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,
      model_Conroe,model_coreduo,model_core2duo,model_Penryn,
      model_n270
  cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
  cpuSockets = 1
  cpuSpeed = 2393.769

I compared libvirt's cpu_map.xml on CentOS 6.3 and CentOS 6.4, and they
do indeed differ in large portions. So this patch should probably be
merged to the 3.1 branch? I will contact Dreyou and request that it be
included in his builds. Otherwise, I guess there will be quite some
fallout once people start picking CentOS 6.4 for oVirt 3.1.

Thanks again and best regards
Patrick

--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
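For a sense of why the two maps differ so much structurally: libvirt's
cpu_map.xml defines models incrementally, each referencing a parent
model plus a few extra features, roughly like this (illustrative,
abbreviated fragment; the exact contents vary by libvirt version):

  <model name='Nehalem'>
    <model name='Penryn'/>        <!-- inherits Penryn's features -->
    <feature name='sse4.2'/>
    <feature name='popcnt'/>
  </model>

A consumer that assumes one fixed layout breaks as soon as a newer
libvirt reshuffles these definitions.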

On Tue, Mar 05, 2013 at 12:32:31PM +0100, Patrick Hurrelmann wrote:
<snip>
> success! Applying that patch made the cpu recognition work again. The
> cpu type in the admin portal shows as Nehalem again.
>
> I compared libvirt's cpu_map.xml on CentOS 6.3 and CentOS 6.4, and
> they do indeed differ in large portions. So this patch should probably
> be merged to the 3.1 branch?
Thank you for reporting this issue and verifying its fix.

I'm not completely sure that we should keep maintaining the ovirt-3.1
branch upstream - but a build destined for el6.4 must have it.

If you believe we should release a fix version for 3.1, please verify
that http://gerrit.ovirt.org/12723 has no ill effects.

Dan.

On 05.03.2013 13:49, Dan Kenigsberg wrote:
<snip>
> Thank you for reporting this issue and verifying its fix.
>
> I'm not completely sure that we should keep maintaining the ovirt-3.1
> branch upstream - but a build destined for el6.4 must have it.
>
> If you believe we should release a fix version for 3.1, please verify
> that http://gerrit.ovirt.org/12723 has no ill effects.
I did some additional tests, and the new CentOS 6.4 host failed to
start or migrate any vm. It always boils down to:

  Thread-43::ERROR::2013-03-07 15:02:51,950::task::853::TaskManager.Task::(_setError)
  Task=`52a9f96f-3dfd-4bcf-8d7a-db14e650b4c1`::Unexpected error
  Traceback (most recent call last):
    File "/usr/share/vdsm/storage/task.py", line 861, in _run
      return fn(*args, **kargs)
    File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
      res = f(*args, **kwargs)
    File "/usr/share/vdsm/storage/hsm.py", line 2551, in getVolumeSize
      apparentsize = str(volume.Volume.getVSize(sdUUID, imgUUID, volUUID, bs=1))
    File "/usr/share/vdsm/storage/volume.py", line 283, in getVSize
      return mysd.getVolumeClass().getVSize(mysd, imgUUID, volUUID, bs)
    File "/usr/share/vdsm/storage/blockVolume.py", line 101, in getVSize
      return int(int(lvm.getLV(sdobj.sdUUID, volUUID).size) / bs)
    File "/usr/share/vdsm/storage/lvm.py", line 772, in getLV
      lv = _lvminfo.getLv(vgName, lvName)
    File "/usr/share/vdsm/storage/lvm.py", line 567, in getLv
      lvs = self._reloadlvs(vgName)
    File "/usr/share/vdsm/storage/lvm.py", line 419, in _reloadlvs
      self._lvs.pop((vgName, lvName), None)
    File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
      self.gen.throw(type, value, traceback)
    File "/usr/share/vdsm/storage/misc.py", line 1219, in acquireContext
      yield self
    File "/usr/share/vdsm/storage/lvm.py", line 404, in _reloadlvs
      lv = makeLV(*fields)
    File "/usr/share/vdsm/storage/lvm.py", line 218, in makeLV
      attrs = _attr2NamedTuple(args[LV._fields.index("attr")], LV_ATTR_BITS, "LV_ATTR")
    File "/usr/share/vdsm/storage/lvm.py", line 188, in _attr2NamedTuple
      attrs = Attrs(*values)
  TypeError: __new__() takes exactly 9 arguments (10 given)

followed by:

  Thread-43::ERROR::2013-03-07 15:02:51,987::dispatcher::69::Storage.Dispatcher.Protect::(run)
  __new__() takes exactly 9 arguments (10 given)
  Traceback (most recent call last):
    File "/usr/share/vdsm/storage/dispatcher.py", line 61, in run
      result = ctask.prepare(self.func, *args, **kwargs)
    File "/usr/share/vdsm/storage/task.py", line 1164, in prepare
      raise self.error
  TypeError: __new__() takes exactly 9 arguments (10 given)
  Thread-43::DEBUG::2013-03-07 15:02:51,987::vm::580::vm.Vm::(_startUnderlyingVm)
  vmId=`7db86f12-8c57-4d2b-a853-a6fd6f7ee82d`::_ongoingCreations released
  Thread-43::ERROR::2013-03-07 15:02:51,987::vm::604::vm.Vm::(_startUnderlyingVm)
  vmId=`7db86f12-8c57-4d2b-a853-a6fd6f7ee82d`::The vm start process failed
  Traceback (most recent call last):
    File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
      self._run()
    File "/usr/share/vdsm/libvirtvm.py", line 1289, in _run
      devices = self.buildConfDevices()
    File "/usr/share/vdsm/vm.py", line 431, in buildConfDevices
      self._normalizeVdsmImg(drv)
    File "/usr/share/vdsm/vm.py", line 358, in _normalizeVdsmImg
      drv['truesize'] = res['truesize']
  KeyError: 'truesize'

In webadmin, the start and migrate operations fail with 'truesize'.

I found BZ#876958, which has the very same error, so I tried patch
http://gerrit.ovirt.org/9317. I had to apply it manually (I guess the
patch needs a rebase for 3.1), but it works.
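The TypeError is the classic fixed-arity namedtuple trap: el6.4's LVM
reports one more lv_attr character than el6.3's, while the namedtuple
in vdsm's lvm.py still declares the old field count. A minimal
reproduction (field names approximate vdsm's; the 9-character attr
string is an assumed el6.4-style value):

  from collections import namedtuple

  # 8 fields, matching the old 8-character lv_attr
  Attrs = namedtuple('LV_ATTR', ('voltype', 'permission', 'allocations',
                                 'fixedminor', 'state', 'devopen',
                                 'target', 'zero'))

  values = tuple('-wi-ao---')   # 9 characters on the newer LVM
  try:
      Attrs(*values)
  except TypeError as e:
      print(e)   # __new__() takes exactly 9 arguments (10 given)

  # a tolerant variant: only consume as many characters as declared
  attrs = Attrs(*values[:len(Attrs._fields)])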
I can now start new virtual machines successfully on a CentOS 6.4 /
oVirt 3.1 host. Migration of vms from CentOS 6.3 hosts works, but not
the other way around. Migration from 6.4 to 6.3 fails:

  Thread-1296::ERROR::2013-03-07 15:55:24,845::vm::176::vm.Vm::(_recover)
  vmId=`c978cbf8-6b4d-4d6f-9435-480d9fed31c4`::internal error Process
  exited while reading console log output: Supported machines are:
  pc         RHEL 6.3.0 PC (alias of rhel6.3.0)
  rhel6.3.0  RHEL 6.3.0 PC (default)
  rhel6.2.0  RHEL 6.2.0 PC
  rhel6.1.0  RHEL 6.1.0 PC
  rhel6.0.0  RHEL 6.0.0 PC
  rhel5.5.0  RHEL 5.5.0 PC
  rhel5.4.4  RHEL 5.4.4 PC
  rhel5.4.0  RHEL 5.4.0 PC

  Thread-1296::ERROR::2013-03-07 15:55:24,988::vm::240::vm.Vm::(run)
  vmId=`c978cbf8-6b4d-4d6f-9435-480d9fed31c4`::Failed to migrate
  Traceback (most recent call last):
    File "/usr/share/vdsm/vm.py", line 223, in run
      self._startUnderlyingMigration()
    File "/usr/share/vdsm/libvirtvm.py", line 451, in _startUnderlyingMigration
      None, maxBandwidth)
    File "/usr/share/vdsm/libvirtvm.py", line 491, in f
      ret = attr(*args, **kwargs)
    File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
      ret = f(*args, **kwargs)
    File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2
      if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
  libvirtError: internal error Process exited while reading console log
  output: Supported machines are: [same machine list as above]

But I guess this is fine, and migration from a higher host version to a
lower one is probably just not supported, right?

Regards
Patrick

--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich

On Thu, Mar 07, 2013 at 03:59:27PM +0100, Patrick Hurrelmann wrote:
On 05.03.2013 13:49, Dan Kenigsberg wrote:
On Tue, Mar 05, 2013 at 12:32:31PM +0100, Patrick Hurrelmann wrote:
On 05.03.2013 11:14, Dan Kenigsberg wrote: <snip>
> > My version of vdsm as stated by Dreyou: > v 4.10.0-0.46 (.15), builded from > b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1) > > I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was merged to > 3.1 Branch? > > I applied that patch locally and restarted vdsmd but this does not > change anything. Supported cpu is still as low as Conroe instead of > Nehalem. Or is there more to do than patching libvirtvm.py?
What is libvirt's opinion about your cpu compatibility?
virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
If you do not get "Host CPU is a superset of CPU described in bla", then the problem is within libvirt.
Dan.
Hi Dan,
virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>') Host CPU is a superset of CPU described in /dev/fd/63
So libvirt obviously is fine. Something different would have surprised my as virsh capabilities seemed correct anyway.
So maybe, just maybe, libvirt has changed their cpu_map, a map that ovirt-3.1 had a bug reading.
Would you care to apply http://gerrit.ovirt.org/5035 to see if this is it?
Dan.
Hi Dan,
success! Applying that patch made the cpu recognition work again. The cpu type in admin portal shows again as Nehalem. Output from getVdsCaps:
cpuCores = 4 cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge, mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2, ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc, arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc, aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2, ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida, dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem, model_Conroe,model_coreduo,model_core2duo,model_Penryn, model_n270 cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz cpuSockets = 1 cpuSpeed = 2393.769
I compared libvirt's cpu_map.xml on both Centos 6.3 and CentOS 6.4 and indeed they do differ in large portions. So this patch should probably be merged to 3.1 branch? I will contact Dreyou and request that this patch will also be included in his builds. I guess otherwise there will be quite some fallout after people start picking CentOS 6.4 for oVirt 3.1.
Thanks again and best regards
Thank you for reporting this issue and verifying its fix.
I'm not completely sure that we should keep maintaining the ovirt-3.1 branch upstream - but a build destined for el6.4 must have it.
If you believe we should release a fix version for 3.1, please verify that http://gerrit.ovirt.org/12723 has no ill effects.
Dan.
I did none additional tests and the new CentOS 6.4 host failed start or migrate any vm. It always boils down to:
Thread-43::ERROR::2013-03-07 15:02:51,950::task::853::TaskManager.Task::(_setError) Task=`52a9f96f-3dfd-4bcf-8d7a-db14e650b4c1`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 861, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 38, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.py", line 2551, in getVolumeSize apparentsize = str(volume.Volume.getVSize(sdUUID, imgUUID, volUUID, bs=1)) File "/usr/share/vdsm/storage/volume.py", line 283, in getVSize return mysd.getVolumeClass().getVSize(mysd, imgUUID, volUUID, bs) File "/usr/share/vdsm/storage/blockVolume.py", line 101, in getVSize return int(int(lvm.getLV(sdobj.sdUUID, volUUID).size) / bs) File "/usr/share/vdsm/storage/lvm.py", line 772, in getLV lv = _lvminfo.getLv(vgName, lvName) File "/usr/share/vdsm/storage/lvm.py", line 567, in getLv lvs = self._reloadlvs(vgName) File "/usr/share/vdsm/storage/lvm.py", line 419, in _reloadlvs self._lvs.pop((vgName, lvName), None) File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__ self.gen.throw(type, value, traceback) File "/usr/share/vdsm/storage/misc.py", line 1219, in acquireContext yield self File "/usr/share/vdsm/storage/lvm.py", line 404, in _reloadlvs lv = makeLV(*fields) File "/usr/share/vdsm/storage/lvm.py", line 218, in makeLV attrs = _attr2NamedTuple(args[LV._fields.index("attr")], LV_ATTR_BITS, "LV_ATTR") File "/usr/share/vdsm/storage/lvm.py", line 188, in _attr2NamedTuple attrs = Attrs(*values) TypeError: __new__() takes exactly 9 arguments (10 given)
and followed by:
Thread-43::ERROR::2013-03-07 15:02:51,987::dispatcher::69::Storage.Dispatcher.Protect::(run) __new__() takes exactly 9 arguments (10 given) Traceback (most recent call last): File "/usr/share/vdsm/storage/dispatcher.py", line 61, in run result = ctask.prepare(self.func, *args, **kwargs) File "/usr/share/vdsm/storage/task.py", line 1164, in prepare raise self.error TypeError: __new__() takes exactly 9 arguments (10 given) Thread-43::DEBUG::2013-03-07 15:02:51,987::vm::580::vm.Vm::(_startUnderlyingVm) vmId=`7db86f12-8c57-4d2b-a853-a6fd6f7ee82d`::_ongoingCreations released Thread-43::ERROR::2013-03-07 15:02:51,987::vm::604::vm.Vm::(_startUnderlyingVm) vmId=`7db86f12-8c57-4d2b-a853-a6fd6f7ee82d`::The vm start process failed Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm self._run() File "/usr/share/vdsm/libvirtvm.py", line 1289, in _run devices = self.buildConfDevices() File "/usr/share/vdsm/vm.py", line 431, in buildConfDevices self._normalizeVdsmImg(drv) File "/usr/share/vdsm/vm.py", line 358, in _normalizeVdsmImg drv['truesize'] = res['truesize'] KeyError: 'truesize'
In webadmin the start and migrate operations fail with 'truesize'.
I found BZ#876958, which shows the very same error, so I tried to apply patch http://gerrit.ovirt.org/9317. I had to apply it manually (I guess the patch would need a rebase for 3.1), but it works.
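For context, the TypeError above is plain namedtuple arity: vdsm expands lvm's lv_attr string into a fixed-width namedtuple, and the lvm2 shipped in 6.4 reports one more attr character than the 6.3-era tuple has fields. A minimal, self-contained sketch of the failure mode (the field names here are illustrative, not necessarily vdsm's exact LV_ATTR_BITS):

    from collections import namedtuple

    # Eight attribute bits, as vdsm 4.10 expects from a 6.3-era lvm2.
    # __new__ of this class takes exactly 9 arguments: cls plus 8 fields.
    Attrs = namedtuple("LV_ATTR", ("voltype", "permission", "allocations",
                                   "fixedminor", "state", "devopen",
                                   "target", "zero"))

    print(Attrs(*"-wi-ao--"))    # 8 attr chars (6.3-style): works
    try:
        Attrs(*"-wi-ao---")      # 9 attr chars (6.4-style): one too many
    except TypeError as e:
        # on Python 2.6 this prints exactly:
        # __new__() takes exactly 9 arguments (10 given)
        print(e)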
Thanks for the report. I've made a public backport of this in http://gerrit.ovirt.org/12836/ and would again ask you to mark it as verified.
I can now start new virtual machines successfully on a CentOS 6.4 / oVirt 3.1 host. Migration of vms from CentOS 6.3 hosts works, but not the other way around. Migration from 6.4 to 6.3 fails:
Thread-1296::ERROR::2013-03-07 15:55:24,845::vm::176::vm.Vm::(_recover) vmId=`c978cbf8-6b4d-4d6f-9435-480d9fed31c4`::internal error Process exited while reading console log output: Supported machines are:
  pc         RHEL 6.3.0 PC (alias of rhel6.3.0)
  rhel6.3.0  RHEL 6.3.0 PC (default)
  rhel6.2.0  RHEL 6.2.0 PC
  rhel6.1.0  RHEL 6.1.0 PC
  rhel6.0.0  RHEL 6.0.0 PC
  rhel5.5.0  RHEL 5.5.0 PC
  rhel5.4.4  RHEL 5.4.4 PC
  rhel5.4.0  RHEL 5.4.0 PC
Thread-1296::ERROR::2013-03-07 15:55:24,988::vm::240::vm.Vm::(run) vmId=`c978cbf8-6b4d-4d6f-9435-480d9fed31c4`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 223, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 451, in _startUnderlyingMigration
    None, maxBandwidth)
  File "/usr/share/vdsm/libvirtvm.py", line 491, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2
    if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
libvirtError: internal error Process exited while reading console log output: Supported machines are:
  pc         RHEL 6.3.0 PC (alias of rhel6.3.0)
  rhel6.3.0  RHEL 6.3.0 PC (default)
  rhel6.2.0  RHEL 6.2.0 PC
  rhel6.1.0  RHEL 6.1.0 PC
  rhel6.0.0  RHEL 6.0.0 PC
  rhel5.5.0  RHEL 5.5.0 PC
  rhel5.4.4  RHEL 5.4.4 PC
  rhel5.4.0  RHEL 5.4.0 PC
But I guess this is fine; migration from a host with a higher version to one with a lower version is probably not supported, right?
Well, I suppose that qemu would allow migration if you begin with a *guest* of version rhel6.3.0. Please try it out.

Dan.
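P.S. A quick way to check which machine type a running guest actually got (vm-name is a placeholder):

    # on the source host, while the vm is running
    virsh -r dumpxml vm-name | grep -o "machine='[^']*'"

If the guest was started with machine='rhel6.4.0' (the canonical 'pc' on a 6.4 host), the 6.3 target's qemu-kvm simply does not know that machine type, which would match the "Supported machines are:" error above.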

Alright, just verified it. A vm started on a 6.3 host can be successfully migrated to the new 6.4 host and then back to any other 6.3 host. It just won't migrate a vm started on 6.4 to any host running 6.3.

Regards
Patrick

--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich

This surprises me. Engine should have used the same emulatedMachine value, independent of the initial host. Could you share the vdsm.log lines mentioning "emulatedMachine" in both cases?

Dan.
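P.S. Something like this on each of the two source hosts should pull them out (/var/log/vdsm/vdsm.log is vdsm's default log location on EL6):

    # show the most recent VM-creation parameters mentioning the machine type
    grep -i emulatedmachine /var/log/vdsm/vdsm.log | tail -20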

Hi Dan,

sorry for coming back to this so late. I checked it: the default emulatedMachine is pc, and the definition of machine pc differs between 6.3 and 6.4.

virsh 6.3:

<guest>
  <os_type>hvm</os_type>
  <arch name='x86_64'>
    <wordsize>64</wordsize>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <machine>rhel6.3.0</machine>
    <machine canonical='rhel6.3.0'>pc</machine>
    <machine>rhel6.2.0</machine>
    <machine>rhel6.1.0</machine>
    <machine>rhel6.0.0</machine>
    <machine>rhel5.5.0</machine>
    <machine>rhel5.4.4</machine>
    <machine>rhel5.4.0</machine>
    <domain type='qemu'>
    </domain>
    <domain type='kvm'>
      <emulator>/usr/libexec/qemu-kvm</emulator>
    </domain>
  </arch>
  <features>
    <cpuselection/>
    <deviceboot/>
    <acpi default='on' toggle='yes'/>
    <apic default='on' toggle='no'/>
  </features>
</guest>

virsh 6.4:

<guest>
  <os_type>hvm</os_type>
  <arch name='x86_64'>
    <wordsize>64</wordsize>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <machine>rhel6.4.0</machine>
    <machine canonical='rhel6.4.0'>pc</machine>
    <machine>rhel6.3.0</machine>
    <machine>rhel6.2.0</machine>
    <machine>rhel6.1.0</machine>
    <machine>rhel6.0.0</machine>
    <machine>rhel5.5.0</machine>
    <machine>rhel5.4.4</machine>
    <machine>rhel5.4.0</machine>
    <domain type='qemu'>
    </domain>
    <domain type='kvm'>
      <emulator>/usr/libexec/qemu-kvm</emulator>
    </domain>
  </arch>
  <features>
    <cpuselection/>
    <deviceboot/>
    <acpi default='on' toggle='yes'/>
    <apic default='on' toggle='no'/>
  </features>
</guest>

Regards
Patrick

--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich

----- Original Message -----
From: "Patrick Hurrelmann" <patrick.hurrelmann@lobster.de>
To: "users" <users@ovirt.org>
Cc: "Omer Frenkel" <ofrenkel@redhat.com>
Sent: Monday, March 11, 2013 11:39:46 AM
Subject: Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)
> I checked it: the default emulatedMachine is pc, and the definition of
> machine pc differs between 6.3 and 6.4.
The value sent to vdsm by the engine is configurable per cluster compatibility version; there is no reason for it to be 'pc' anywhere. You can use engine-config to change EmulatedMachine to the correct value (for a 3.2 cluster you probably need to use 'rhel6.3.0').
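A sketch, assuming the usual engine-config syntax (run on the engine host; adjust the cluster level to yours):

    # hypothetical example: pin 3.2-level clusters to the rhel6.3.0 machine type
    engine-config -s EmulatedMachine=rhel6.3.0 --cver=3.2
    # the engine must be restarted for the change to take effect
    service ovirt-engine restart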