Thanks guys,
Your questions led me to discover that another technician had
installed "vdsm-hook-nestedvt" a while back, to try to get nested
virtualization working on a VM. I have removed the hook and restarted,
and this resolved the issue.
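(For anyone hitting the same symptom: the hook drops a module option the
stock EL6 kvm_intel doesn't know. Here is roughly how I'd verify the
leftover config is really gone; the filename pattern is my guess, the
tell-tale line is "options kvm_intel nested=1".)

```shell
# Report any modprobe.d file that still sets 'nested' for kvm_intel.
# Defaults to /etc/modprobe.d; pass another directory to test elsewhere.
find_nested_option() {
    dir=${1:-/etc/modprobe.d}
    grep -rl 'kvm_intel.*nested' "$dir" 2>/dev/null
}

# After deleting whatever file this reports, `modprobe kvm_intel`
# should succeed again and `lsmod | grep kvm_intel` should list it.
find_nested_option || echo "no leftover nested option found"
```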
Thanks once again, much appreciated.
danken - shouldn't the hook (if in rpm form) require a kernel that
actually supports nested virtualization, so that it cannot be installed
on .el6, etc.?
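(Just to sketch what I mean; this assumes the packaging could run a
check like the one below, and that modinfo's "parm:" lines are the
right place to look. It is not the hook's actual code.)

```shell
# Sketch of a pre-install guard: only proceed if the running kernel's
# kvm_intel module actually exposes a 'nested' parameter.
supports_nested_kvm() {
    modinfo kvm_intel 2>/dev/null | grep -q '^parm:[[:space:]]*nested'
}

if supports_nested_kvm; then
    echo "kernel supports nested KVM"
else
    echo "kernel lacks nested KVM support; refusing to install the hook" >&2
fi
```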
Regards.
Neil Wilson.
On Mon, Oct 21, 2013 at 4:25 PM, Alon Bar-Lev <alonbl(a)redhat.com> wrote:
>
>
> ----- Original Message -----
>> From: "Neil" <nwilson123(a)gmail.com>
>> To: "Dan Kenigsberg" <danken(a)redhat.com>
>> Cc: users(a)ovirt.org
>> Sent: Monday, October 21, 2013 5:20:41 PM
>> Subject: Re: [Users] Host Error
>>
>> Okay, below is the modprobe and attached is the capabilities from the
>> vdsm.log
>>
>> modprobe kvm_intel
>> FATAL: Error inserting kvm_intel
>> (/lib/modules/2.6.32-358.23.2.el6.x86_64/kernel/arch/x86/kvm/kvm-intel.ko):
>> Unknown symbol in module, or unknown parameter (see dmesg)
>>
>> According to this message, I should see something in dmesg, but I
>> don't see anything when I run the modprobe.
>>
>> If I search through my dmesg I do see this though "kvm_intel: Unknown
>> parameter `nested'"
>
> Do you have anything special at /etc/modprobe.d?
> Have you compiled the kernel manually?
>
>>
>> Please shout if you need anything else though.
>>
>> Thanks so much.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>>
>> On Mon, Oct 21, 2013 at 3:22 PM, Dan Kenigsberg <danken(a)redhat.com> wrote:
>>> On Mon, Oct 21, 2013 at 12:50:29PM +0200, Neil wrote:
>>>> Hi Dan,
>>>>
>>>> Thanks, below are the results.
>>>>
>>>> [root@ovirt02 ~]# virsh -r capabilities
>>>> <capabilities>
>>>>
>>>> <host>
>>>> <uuid>6d0aa20e-c6c2-410d-b3c5-3c87af15dd7c</uuid>
>>>> <cpu>
>>>> <arch>x86_64</arch>
>>>> <model>Haswell</model>
>>>> <vendor>Intel</vendor>
>>>> <topology sockets='1' cores='4' threads='2'/>
>>>> <feature name='abm'/>
>>>> <feature name='pdpe1gb'/>
>>>> <feature name='rdrand'/>
>>>> <feature name='f16c'/>
>>>> <feature name='osxsave'/>
>>>> <feature name='pdcm'/>
>>>> <feature name='xtpr'/>
>>>> <feature name='tm2'/>
>>>> <feature name='est'/>
>>>> <feature name='smx'/>
>>>> <feature name='vmx'/>
>>>> <feature name='ds_cpl'/>
>>>> <feature name='monitor'/>
>>>> <feature name='dtes64'/>
>>>> <feature name='pbe'/>
>>>> <feature name='tm'/>
>>>> <feature name='ht'/>
>>>> <feature name='ss'/>
>>>> <feature name='acpi'/>
>>>> <feature name='ds'/>
>>>> <feature name='vme'/>
>>>> </cpu>
>>>> <power_management>
>>>> <suspend_mem/>
>>>> <suspend_disk/>
>>>> </power_management>
>>>> <migration_features>
>>>> <live/>
>>>> <uri_transports>
>>>> <uri_transport>tcp</uri_transport>
>>>> </uri_transports>
>>>> </migration_features>
>>>> <topology>
>>>> <cells num='1'>
>>>> <cell id='0'>
>>>> <cpus num='8'>
>>>> <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/>
>>>> <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/>
>>>> <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/>
>>>> <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/>
>>>> <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/>
>>>> <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/>
>>>> <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/>
>>>> <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/>
>>>> </cpus>
>>>> </cell>
>>>> </cells>
>>>> </topology>
>>>> <secmodel>
>>>> <model>selinux</model>
>>>> <doi>0</doi>
>>>> </secmodel>
>>>> <secmodel>
>>>> <model>dac</model>
>>>> <doi>0</doi>
>>>> </secmodel>
>>>> </host>
>>>>
>>>> <guest>
>>>> <os_type>hvm</os_type>
>>>> <arch name='i686'>
>>>> <wordsize>32</wordsize>
>>>> <emulator>/usr/libexec/qemu-kvm</emulator>
>>>> <machine>rhel6.4.0</machine>
>>>> <machine canonical='rhel6.4.0'>pc</machine>
>>>> <machine>rhel6.3.0</machine>
>>>> <machine>rhel6.2.0</machine>
>>>> <machine>rhel6.1.0</machine>
>>>> <machine>rhel6.0.0</machine>
>>>> <machine>rhel5.5.0</machine>
>>>> <machine>rhel5.4.4</machine>
>>>> <machine>rhel5.4.0</machine>
>>>> <domain type='qemu'>
>>>> </domain>
>>>> </arch>
>>>> <features>
>>>> <cpuselection/>
>>>> <deviceboot/>
>>>> <acpi default='on' toggle='yes'/>
>>>> <apic default='on' toggle='no'/>
>>>> <pae/>
>>>> <nonpae/>
>>>> </features>
>>>> </guest>
>>>>
>>>> <guest>
>>>> <os_type>hvm</os_type>
>>>> <arch name='x86_64'>
>>>> <wordsize>64</wordsize>
>>>> <emulator>/usr/libexec/qemu-kvm</emulator>
>>>> <machine>rhel6.4.0</machine>
>>>> <machine canonical='rhel6.4.0'>pc</machine>
>>>> <machine>rhel6.3.0</machine>
>>>> <machine>rhel6.2.0</machine>
>>>> <machine>rhel6.1.0</machine>
>>>> <machine>rhel6.0.0</machine>
>>>> <machine>rhel5.5.0</machine>
>>>> <machine>rhel5.4.4</machine>
>>>> <machine>rhel5.4.0</machine>
>>>> <domain type='qemu'>
>>>> </domain>
>>>> </arch>
>>>> <features>
>>>> <cpuselection/>
>>>> <deviceboot/>
>>>> <acpi default='on' toggle='yes'/>
>>>> <apic default='on' toggle='no'/>
>>>> </features>
>>>> </guest>
>>>>
>>>> </capabilities>
>>>
>>> seems fine to me.
>>>
>>>>
>>>>
>>>> [root@ovirt02 ~]# lsmod | grep kvm
>>>> kvm 317504 0
>>>
>>> I'd expect kvm_intel here, too. Could you manually modprobe it and
>>> retry?
>>>
>>> Another thing that I should have asked for is the output of
>>> getVdsCapabilities in your vdsm.log.
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>