
Hi guys,

We had a power failure over the weekend, and when powering everything back on, one of the hosts gives the following error when trying to re-activate it:

"Host ovirt02.blabla.co.za running without virtualization hardware acceleration"

I've checked in the machine's BIOS and the VT options are still enabled. I've also reset the PC's BIOS and then confirmed all the CPU features were enabled as a precaution, and yet I still get the same error. Could this be an ovirt/vdsm issue or is it more likely a hardware issue?

These are my versions...

vdsm-cli-4.10.3-10.el6.centos.alt.noarch
vdsm-4.10.3-10.el6.centos.alt.x86_64
vdsm-xmlrpc-4.10.3-10.el6.centos.alt.noarch
vdsm-hook-nestedvt-4.10.3-10.el6.centos.alt.noarch
vdsm-python-4.10.3-10.el6.centos.alt.x86_64
ovirt-engine-genericapi-3.2.1-1.41.el6.noarch
ovirt-engine-dbscripts-3.2.1-1.41.el6.noarch
ovirt-engine-tools-3.2.1-1.41.el6.noarch
ovirt-engine-sdk-3.2.0.9-1.el6.noarch
ovirt-engine-restapi-3.2.1-1.41.el6.noarch
ovirt-host-deploy-1.1.0-0.0.master.el6.noarch
ovirt-log-collector-3.2.0-1.el6.noarch
ovirt-engine-backend-3.2.1-1.41.el6.noarch
ovirt-image-uploader-3.2.0-1.el6.noarch
ovirt-engine-userportal-3.2.1-1.41.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-setup-3.2.1-1.41.el6.noarch
ovirt-host-deploy-java-1.1.0-0.0.master.el6.noarch
ovirt-iso-uploader-3.2.0-1.el6.noarch
ovirt-engine-3.2.1-1.41.el6.noarch
ovirt-engine-cli-3.2.0.10-1.el6.noarch
ovirt-engine-webadmin-portal-3.2.1-1.41.el6.noarch

Any help is appreciated. Thanks.

Regards.
Neil Wilson
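A quick way to tell a hardware/BIOS problem apart from an oVirt/vdsm problem is to run the generic KVM checks on the host itself; a minimal sketch (standard Linux commands, nothing oVirt-specific):

# Does the CPU advertise hardware virtualization (vmx for Intel VT-x, svm for AMD-V)?
egrep -c '(vmx|svm)' /proc/cpuinfo

# Are the KVM modules loaded, and is the device node present?
lsmod | grep kvm
ls -l /dev/kvm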

On Mon, Oct 21, 2013 at 10:57:12AM +0200, Neil wrote:
Hi guys,
We had a power failure over the weekend, and when powering everything back on, one of the hosts gives the following error when trying to re-activate it.
"Host ovirt02.blabla.co.za running without virtualization hardware acceleration"
I've checked in the machine's BIOS and the VT options are still enabled. I've also reset the PC's BIOS and then confirmed all the CPU features were enabled as a precaution, and yet I still get the same error.
Could this be an ovirt/vdsm issue or is it more likely a hardware issue?
These are my versions...
vdsm-cli-4.10.3-10.el6.centos.alt.noarch vdsm-4.10.3-10.el6.centos.alt.x86_64 vdsm-xmlrpc-4.10.3-10.el6.centos.alt.noarch vdsm-hook-nestedvt-4.10.3-10.el6.centos.alt.noarch vdsm-python-4.10.3-10.el6.centos.alt.x86_64
ovirt-engine-genericapi-3.2.1-1.41.el6.noarch ovirt-engine-dbscripts-3.2.1-1.41.el6.noarch ovirt-engine-tools-3.2.1-1.41.el6.noarch ovirt-engine-sdk-3.2.0.9-1.el6.noarch ovirt-engine-restapi-3.2.1-1.41.el6.noarch ovirt-host-deploy-1.1.0-0.0.master.el6.noarch ovirt-log-collector-3.2.0-1.el6.noarch ovirt-engine-backend-3.2.1-1.41.el6.noarch ovirt-image-uploader-3.2.0-1.el6.noarch ovirt-engine-userportal-3.2.1-1.41.el6.noarch ovirt-engine-jbossas711-1-0.x86_64 ovirt-engine-setup-3.2.1-1.41.el6.noarch ovirt-host-deploy-java-1.1.0-0.0.master.el6.noarch ovirt-iso-uploader-3.2.0-1.el6.noarch ovirt-engine-3.2.1-1.41.el6.noarch ovirt-engine-cli-3.2.0.10-1.el6.noarch ovirt-engine-webadmin-portal-3.2.1-1.41.el6.noarch
Any help is appreciated.
What does `virsh -r capabilities` report when run on that host? And `lsmod | grep kvm` ?

Hi Dan,

Thanks, below are the results.

[root@ovirt02 ~]# virsh -r capabilities
<capabilities>
<host> <uuid>6d0aa20e-c6c2-410d-b3c5-3c87af15dd7c</uuid> <cpu> <arch>x86_64</arch> <model>Haswell</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='abm'/> <feature name='pdpe1gb'/> <feature name='rdrand'/> <feature name='f16c'/> <feature name='osxsave'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='smx'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> </power_management> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/> <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/> </cpus> </cell> </cells> </topology> <secmodel> <model>selinux</model> <doi>0</doi> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> </secmodel> </host>
<guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest>
<guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
</capabilities>

[root@ovirt02 ~]# lsmod | grep kvm
kvm 317504 0

Thanks!

Regards.
Neil Wilson.

On Mon, Oct 21, 2013 at 11:29 AM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Oct 21, 2013 at 10:57:12AM +0200, Neil wrote:
Hi guys,
We had a power failure over the weekend, and when powering everything back on, one of the hosts gives the following error when trying to re-activate it.
"Host ovirt02.blabla.co.za running without virtualization hardware acceleration"
I've checked in the machine's BIOS and the VT options are still enabled. I've also reset the PC's BIOS and then confirmed all the CPU features were enabled as a precaution, and yet I still get the same error.
Could this be an ovirt/vdsm issue or is it more likely a hardware issue?
These are my versions...
vdsm-cli-4.10.3-10.el6.centos.alt.noarch vdsm-4.10.3-10.el6.centos.alt.x86_64 vdsm-xmlrpc-4.10.3-10.el6.centos.alt.noarch vdsm-hook-nestedvt-4.10.3-10.el6.centos.alt.noarch vdsm-python-4.10.3-10.el6.centos.alt.x86_64
ovirt-engine-genericapi-3.2.1-1.41.el6.noarch ovirt-engine-dbscripts-3.2.1-1.41.el6.noarch ovirt-engine-tools-3.2.1-1.41.el6.noarch ovirt-engine-sdk-3.2.0.9-1.el6.noarch ovirt-engine-restapi-3.2.1-1.41.el6.noarch ovirt-host-deploy-1.1.0-0.0.master.el6.noarch ovirt-log-collector-3.2.0-1.el6.noarch ovirt-engine-backend-3.2.1-1.41.el6.noarch ovirt-image-uploader-3.2.0-1.el6.noarch ovirt-engine-userportal-3.2.1-1.41.el6.noarch ovirt-engine-jbossas711-1-0.x86_64 ovirt-engine-setup-3.2.1-1.41.el6.noarch ovirt-host-deploy-java-1.1.0-0.0.master.el6.noarch ovirt-iso-uploader-3.2.0-1.el6.noarch ovirt-engine-3.2.1-1.41.el6.noarch ovirt-engine-cli-3.2.0.10-1.el6.noarch ovirt-engine-webadmin-portal-3.2.1-1.41.el6.noarch
Any help is appreciated.
What does `virsh -r capabilities` report when run on that host? And `lsmod | grep kvm` ?

On Mon, Oct 21, 2013 at 12:50:29PM +0200, Neil wrote:
Hi Dan,
Thanks, below are the results.
[root@ovirt02 ~]# virsh -r capabilities
<capabilities>
<host> <uuid>6d0aa20e-c6c2-410d-b3c5-3c87af15dd7c</uuid> <cpu> <arch>x86_64</arch> <model>Haswell</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='abm'/> <feature name='pdpe1gb'/> <feature name='rdrand'/> <feature name='f16c'/> <feature name='osxsave'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='smx'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> </power_management> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/> <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/> </cpus> </cell> </cells> </topology> <secmodel> <model>selinux</model> <doi>0</doi> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> </secmodel> </host>
<guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest>
<guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
</capabilities>
seems fine to me.
[root@ovirt02 ~]# lsmod | grep kvm
kvm 317504 0
I'd expect kvm_intel here, too. Could you manually modprobe it and retry?
Another thing that I should have asked for is the output of getVdsCapabilities in your vdsm.log.
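For reference, the checks Dan suggests can be run roughly as follows; /var/log/vdsm/vdsm.log is assumed to be the default vdsm log location and may differ on a given install:

# Try loading the Intel KVM module by hand and see whether it sticks
modprobe kvm_intel
lsmod | grep kvm

# If the modprobe fails, the kernel log usually says why
dmesg | tail -n 20

# Pull the most recent getVdsCapabilities entries out of the vdsm log
grep getVdsCapabilities /var/log/vdsm/vdsm.log | tail -n 5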

Okay, below is the modprobe, and attached is the capabilities output from the vdsm.log.

modprobe kvm_intel
FATAL: Error inserting kvm_intel (/lib/modules/2.6.32-358.23.2.el6.x86_64/kernel/arch/x86/kvm/kvm-intel.ko): Unknown symbol in module, or unknown parameter (see dmesg)

According to this message, I should see something in dmesg, but I don't see anything when I run the modprobe. If I search through my dmesg I do see this though:

"kvm_intel: Unknown parameter `nested'"

Please shout if you need anything else though.

Thanks so much.

Regards.
Neil Wilson.

On Mon, Oct 21, 2013 at 3:22 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Oct 21, 2013 at 12:50:29PM +0200, Neil wrote:
Hi Dan,
Thanks, below are the results.
[root@ovirt02 ~]# virsh -r capabilities
<capabilities>
<host> <uuid>6d0aa20e-c6c2-410d-b3c5-3c87af15dd7c</uuid> <cpu> <arch>x86_64</arch> <model>Haswell</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='abm'/> <feature name='pdpe1gb'/> <feature name='rdrand'/> <feature name='f16c'/> <feature name='osxsave'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='smx'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> </power_management> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/> <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/> </cpus> </cell> </cells> </topology> <secmodel> <model>selinux</model> <doi>0</doi> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> </secmodel> </host>
<guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest>
<guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
</capabilities>
seems fine to me.
[root@ovirt02 ~]# lsmod | grep kvm
kvm 317504 0
I'd expect kvm_intel here, too. Could you manually modprobe it and retry?
Another thing that I should have asked for is the output of getVdsCapabilities in your vdsm.log.

----- Original Message -----
From: "Neil" <nwilson123@gmail.com> To: "Dan Kenigsberg" <danken@redhat.com> Cc: users@ovirt.org Sent: Monday, October 21, 2013 5:20:41 PM Subject: Re: [Users] Host Error
Okay, below is the modprobe and attached is the capabilities from the vdsm.log
modprobe kvm_intel
FATAL: Error inserting kvm_intel (/lib/modules/2.6.32-358.23.2.el6.x86_64/kernel/arch/x86/kvm/kvm-intel.ko): Unknown symbol in module, or unknown parameter (see dmesg)
According to this message, I should see something in dmesg, but I don't see anything when I run the modprobe.
If I search through my dmesg I do see this though "kvm_intel: Unknown parameter `nested'"
Do you have anything special at /etc/modprobe.d? Have you compiled the kernel manually?
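One quick way to check both points (standard commands, nothing vdsm-specific):

# Any local module options that could pass an unknown parameter to kvm_intel?
grep -r kvm /etc/modprobe.d/ 2>/dev/null
grep -r nested /etc/modprobe.d/ 2>/dev/null

# Is the running kernel the stock distribution one?
uname -r
rpm -q kernel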
Please shout if you need anything else though.
Thanks so much.
Regards.
Neil Wilson.
On Mon, Oct 21, 2013 at 3:22 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Oct 21, 2013 at 12:50:29PM +0200, Neil wrote:
Hi Dan,
Thanks, below are the results.
[root@ovirt02 ~]# virsh -r capabilities
<capabilities>
<host> <uuid>6d0aa20e-c6c2-410d-b3c5-3c87af15dd7c</uuid> <cpu> <arch>x86_64</arch> <model>Haswell</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='abm'/> <feature name='pdpe1gb'/> <feature name='rdrand'/> <feature name='f16c'/> <feature name='osxsave'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='smx'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> </power_management> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/> <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/> </cpus> </cell> </cells> </topology> <secmodel> <model>selinux</model> <doi>0</doi> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> </secmodel> </host>
<guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest>
<guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
</capabilities>
seems fine to me.
[root@ovirt02 ~]# lsmod | grep kvm
kvm 317504 0
I'd expect kvm_intel here, too. Could you manually modprobe it and retry?
Another thing that I should have asked for is the output of getVdsCapabilities in your vdsm.log.

Thanks guys,

Your questions led me to the fact that another technician had installed "vdsm-hook-nestedvt" to try and get nested virtualization on a VM working a while back. I have removed the hook and restarted, and this resolved the issue.

Thanks once again, much appreciated.

Regards.
Neil Wilson.

On Mon, Oct 21, 2013 at 4:25 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Neil" <nwilson123@gmail.com> To: "Dan Kenigsberg" <danken@redhat.com> Cc: users@ovirt.org Sent: Monday, October 21, 2013 5:20:41 PM Subject: Re: [Users] Host Error
Okay, below is the modprobe and attached is the capabilities from the vdsm.log
modprobe kvm_intel
FATAL: Error inserting kvm_intel (/lib/modules/2.6.32-358.23.2.el6.x86_64/kernel/arch/x86/kvm/kvm-intel.ko): Unknown symbol in module, or unknown parameter (see dmesg)
According to this message, I should see something in dmesg, but I don't see anything when I run the modprobe.
If I search through my dmesg I do see this though "kvm_intel: Unknown parameter `nested'"
Do you have anything special at /etc/modprobe.d? Have you compiled the kernel manually?
Please shout if you need anything else though.
Thanks so much.
Regards.
Neil Wilson.
On Mon, Oct 21, 2013 at 3:22 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Oct 21, 2013 at 12:50:29PM +0200, Neil wrote:
Hi Dan,
Thanks, below are the results.
[root@ovirt02 ~]# virsh -r capabilities
<capabilities>
<host> <uuid>6d0aa20e-c6c2-410d-b3c5-3c87af15dd7c</uuid> <cpu> <arch>x86_64</arch> <model>Haswell</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='abm'/> <feature name='pdpe1gb'/> <feature name='rdrand'/> <feature name='f16c'/> <feature name='osxsave'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='smx'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> </power_management> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/> <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/> </cpus> </cell> </cells> </topology> <secmodel> <model>selinux</model> <doi>0</doi> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> </secmodel> </host>
<guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest>
<guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
</capabilities>
seems fine to me.
[root@ovirt02 ~]# lsmod | grep kvm
kvm 317504 0
I'd expect kvm_intel here, too. Could you manually modprobe it and retry?
Another thing that I should have asked for is the output of getVdsCapabilities in your vdsm.log.
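For anyone hitting the same symptom, a rough sketch of the cleanup Neil describes above, assuming the hook left an `options kvm_intel nested=...` line behind somewhere under /etc/modprobe.d (the exact file, if any, will vary, so check before removing anything):

# Remove the nested-virt hook package
yum remove vdsm-hook-nestedvt

# Find any leftover modprobe option that still sets 'nested' for kvm_intel
grep -rl nested /etc/modprobe.d/

# Reload the module and restart vdsm, then re-activate the host from the engine
modprobe kvm_intel
service vdsmd restart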

On 10/21/2013 03:39 PM, Neil wrote:
Thanks guys,
Your questions led me to the fact that another technician had installed "vdsm-hook-nestedvt" to try and get nested virtualization on a VM working a while back. I have removed the hook and restarted, and this resolved the issue.
Thanks once again, much appreciated.
danken - shouldn't the hook (if in rpm form) require a kernel which actually supports nested, to not be installed on .el6, etc.?
Regards.
Neil Wilson.
On Mon, Oct 21, 2013 at 4:25 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Neil" <nwilson123@gmail.com> To: "Dan Kenigsberg" <danken@redhat.com> Cc: users@ovirt.org Sent: Monday, October 21, 2013 5:20:41 PM Subject: Re: [Users] Host Error
Okay, below is the modprobe and attached is the capabilities from the vdsm.log
modprobe kvm_intel
FATAL: Error inserting kvm_intel (/lib/modules/2.6.32-358.23.2.el6.x86_64/kernel/arch/x86/kvm/kvm-intel.ko): Unknown symbol in module, or unknown parameter (see dmesg)
According to this message, I should see something in dmesg, but I don't see anything when I run the modprobe.
If I search through my dmesg I do see this though "kvm_intel: Unknown parameter `nested'"
Do you have anything special at /etc/modprobe.d? Have you compiled the kernel manually?
Please shout if you need anything else though.
Thanks so much.
Regards.
Neil Wilson.
On Mon, Oct 21, 2013 at 3:22 PM, Dan Kenigsberg <danken@redhat.com> wrote:
On Mon, Oct 21, 2013 at 12:50:29PM +0200, Neil wrote:
Hi Dan,
Thanks, below are the results.
[root@ovirt02 ~]# virsh -r capabilities
<capabilities>
<host> <uuid>6d0aa20e-c6c2-410d-b3c5-3c87af15dd7c</uuid> <cpu> <arch>x86_64</arch> <model>Haswell</model> <vendor>Intel</vendor> <topology sockets='1' cores='4' threads='2'/> <feature name='abm'/> <feature name='pdpe1gb'/> <feature name='rdrand'/> <feature name='f16c'/> <feature name='osxsave'/> <feature name='pdcm'/> <feature name='xtpr'/> <feature name='tm2'/> <feature name='est'/> <feature name='smx'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='dtes64'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='acpi'/> <feature name='ds'/> <feature name='vme'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> </power_management> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <cpus num='8'> <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/> <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/> <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/> <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/> <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/> </cpus> </cell> </cells> </topology> <secmodel> <model>selinux</model> <doi>0</doi> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> </secmodel> </host>
<guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest>
<guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine>rhel6.4.0</machine> <machine canonical='rhel6.4.0'>pc</machine> <machine>rhel6.3.0</machine> <machine>rhel6.2.0</machine> <machine>rhel6.1.0</machine> <machine>rhel6.0.0</machine> <machine>rhel5.5.0</machine> <machine>rhel5.4.4</machine> <machine>rhel5.4.0</machine> <domain type='qemu'> </domain> </arch> <features> <cpuselection/> <deviceboot/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest>
</capabilities>
seems fine to me.
[root@ovirt02 ~]# lsmod | grep kvm
kvm 317504 0
I'd expect kvm_intel here, too. Could you manually modprobe it and retry?
Another thing that I should have asked for is the output of getVdsCapabilities in your vdsm.log.

On Tue, Oct 22, 2013 at 06:56:09AM +0100, Itamar Heim wrote:
On 10/21/2013 03:39 PM, Neil wrote:
Thanks guys,
Your questions led me to the fact that another technician had installed "vdsm-hook-nestedvt" to try and get nested virtualization on a VM working a while back. I have removed the hook and restarted, and this resolved the issue.
Thanks once again, much appreciated.
danken - shouldn't the hook (if in rpm form) require a kernel which actually supports nested, to not be installed on .el6, etc.?
I do not think we should have such a static requirement. The hook was meant to be used by people on EL6 who love re-compiling the kernel.

Dan.

On 22.10.2013 10:57, Dan Kenigsberg wrote:
On Tue, Oct 22, 2013 at 06:56:09AM +0100, Itamar Heim wrote:
On 10/21/2013 03:39 PM, Neil wrote:
Thanks guys,
Your questions led me to the fact that another technician had installed "vdsm-hook-nestedvt" to try and get nested virtualization on a VM working a while back. I have removed the hook and restarted, and this resolved the issue.
Thanks once again, much appreciated.
danken - shouldn't the hook (if in rpm form) require a kernel which actually supports nested, to not be installed on .el6, etc.?
I do not think we should have such a static requirement. The hook was meant to be used by people on EL6 who love re-compiling the kernel.
Hi,

but it would be cool if this were documented, not just on the mailing list? For the people who don't love re-compiling? :-)

Hi guys,

Sorry, not sure if you need any comments from me, but just in case, our kernel is stock standard.

Linux ovirt02.blabla.co.za 2.6.32-358.23.2.el6.x86_64 #1 SMP Wed Oct 16 18:37:12 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Regards.
Neil Wilson.

On Tue, Oct 22, 2013 at 1:56 PM, Sven Kieske <S.Kieske@mittwald.de> wrote:
On 22.10.2013 10:57, Dan Kenigsberg wrote:
On Tue, Oct 22, 2013 at 06:56:09AM +0100, Itamar Heim wrote:
On 10/21/2013 03:39 PM, Neil wrote:
Thanks guys,
Your questions led me to the fact that another technician had installed "vdsm-hook-nestedvt" to try and get nested virtualization on a VM working a while back. I have removed the hook and restarted, and this resolved the issue.
Thanks once again, much appreciated.
danken - shouldn't the hook (if in rpm form) require a kernel which actually supports nested, to not be installed on .el6, etc.?
I do not think we should have such a static requirement. The hook was meant to be used by people on EL6 who love re-compiling the kernel.
Hi,
but it would be cool if this were documented, not just on the mailing list? For the people who don't love re-compiling? :-)

On 10/22/2013 01:16 PM, Neil wrote:
Hi guys,
Sorry, not sure if you need any comments from me, but just in case, our kernel is stock standard.
Linux ovirt02.blabla.co.za 2.6.32-358.23.2.el6.x86_64 #1 SMP Wed Oct 16 18:37:12 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Yes, this kernel doesn't support nested virt, hence the hook is not relevant for it (and is causing problems).
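To verify that on a given host, one can check whether the running kernel's kvm_intel module knows about a `nested` parameter at all; for example:

# List the parameters this kernel's kvm_intel module accepts
modinfo kvm_intel | grep -i parm

# Present only if the loaded module supports nested virt
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo "no nested parameter"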
Regards.
Neil Wilson.
On Tue, Oct 22, 2013 at 1:56 PM, Sven Kieske <S.Kieske@mittwald.de> wrote:
On 22.10.2013 10:57, Dan Kenigsberg wrote:
On Tue, Oct 22, 2013 at 06:56:09AM +0100, Itamar Heim wrote:
On 10/21/2013 03:39 PM, Neil wrote:
Thanks guys,
Your questions led me to the fact that another technician had installed "vdsm-hook-nestedvt" to try and get nested virtualization on a VM working a while back. I have removed the hook and restarted, and this resolved the issue.
Thanks once again, much appreciated.
danken - shouldn't the hook (if in rpm form) require a kernel which actually supports nested, to not be installed on .el6, etc.?
I do not think we should have such a static requirement. The hook was meant to be used by people on EL6 who love re-compiling the kernel.
Hi,
but it would be cool if this were documented, not just on the mailing list? For the people who don't love re-compiling? :-)

On Tue, Oct 22, 2013 at 11:56:48AM +0000, Sven Kieske wrote:
On 22.10.2013 10:57, Dan Kenigsberg wrote:
On Tue, Oct 22, 2013 at 06:56:09AM +0100, Itamar Heim wrote:
On 10/21/2013 03:39 PM, Neil wrote:
Thanks guys,
Your questions led me to the fact that another technician had installed "vdsm-hook-nestedvt" to try and get nested virtualization on a VM working a while back. I have removed the hook and restarted, and this resolved the issue.
Thanks once again, much appreciated.
danken - shouldn't the hook (if in rpm form) require a kernel which actually supports nested, to not be installed on .el6, etc.?
I do not think we should have such a static requirement. The hook was meant to be used by people on EL6 who love re-compiling the kernel.
Hi,
but it would be cool if this were documented, not just on the mailing list? For the people who don't love re-compiling? :-)
Makes sense. http://gerrit.ovirt.org/20399
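For completeness, on a kernel whose kvm_intel does expose the `nested` parameter, enabling nested virt by hand would look roughly like this (the modprobe.d file name below is only illustrative, not something the hook or oVirt mandates):

# Persist the option for future boots (illustrative file name)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf

# Reload the module so the option takes effect (no VMs may be running on the host)
modprobe -r kvm_intel
modprobe kvm_intel

# Confirm
cat /sys/module/kvm_intel/parameters/nested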
participants (5)
- Alon Bar-Lev
- Dan Kenigsberg
- Itamar Heim
- Neil
- Sven Kieske